Test Report: Docker_Linux_crio_arm64 21894

8496c1ca7722bf7d926446d0df8cf9af55d7419f:2025-11-15:42336

Failed tests (41/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.34
35 TestAddons/parallel/Registry 15.28
36 TestAddons/parallel/RegistryCreds 0.48
37 TestAddons/parallel/Ingress 145.32
38 TestAddons/parallel/InspektorGadget 6.29
39 TestAddons/parallel/MetricsServer 5.38
41 TestAddons/parallel/CSI 38.91
42 TestAddons/parallel/Headlamp 3.29
43 TestAddons/parallel/CloudSpanner 6.28
44 TestAddons/parallel/LocalPath 8.5
45 TestAddons/parallel/NvidiaDevicePlugin 5.27
46 TestAddons/parallel/Yakd 5.27
97 TestFunctional/parallel/ServiceCmdConnect 603.57
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.86
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
135 TestFunctional/parallel/ServiceCmd/Format 0.45
136 TestFunctional/parallel/ServiceCmd/URL 0.53
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.96
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.45
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.42
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
171 TestMultiControlPlane/serial/RestartSecondaryNode 521.68
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.69
177 TestMultiControlPlane/serial/RestartCluster 366.57
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 3.35
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 4.02
191 TestJSONOutput/pause/Command 2.5
197 TestJSONOutput/unpause/Command 1.59
282 TestPause/serial/Pause 7.03
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.53
304 TestStartStop/group/old-k8s-version/serial/Pause 6.65
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.53
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.61
322 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.94
328 TestStartStop/group/embed-certs/serial/Pause 8.14
332 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.08
335 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.49
344 TestStartStop/group/newest-cni/serial/Pause 8.14
349 TestStartStop/group/no-preload/serial/Pause 7.03
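
Note: the Volcano, Registry, and RegistryCreds failures detailed below all end the same way: the trailing "addons disable" call exits with status 11 (MK_ADDON_DISABLE_PAUSED) because minikube's paused-state check runs "sudo runc list -f json" on the node and that command fails with "open /run/runc: no such file or directory". A minimal way to re-trigger one of those exits by hand, assuming the addons-800763 profile from this run is still up (command taken verbatim from the logs; any of the affected addons can stand in for registry):

	out/minikube-linux-arm64 -p addons-800763 addons disable registry --alsologtostderr -v=1
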
TestAddons/serial/Volcano (0.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 addons disable volcano --alsologtostderr -v=1: exit status 11 (336.154343ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 10:34:19.391343  593211 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:34:19.392452  593211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:19.392473  593211 out.go:374] Setting ErrFile to fd 2...
	I1115 10:34:19.392479  593211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:19.392759  593211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:34:19.393101  593211 mustload.go:66] Loading cluster: addons-800763
	I1115 10:34:19.393566  593211 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:19.393588  593211 addons.go:607] checking whether the cluster is paused
	I1115 10:34:19.393708  593211 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:19.393726  593211 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:34:19.394181  593211 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:34:19.430413  593211 ssh_runner.go:195] Run: systemctl --version
	I1115 10:34:19.430478  593211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:34:19.459864  593211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:34:19.567579  593211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:34:19.567718  593211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:34:19.604445  593211 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:34:19.604467  593211 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:34:19.604472  593211 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:34:19.604476  593211 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:34:19.604479  593211 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:34:19.604486  593211 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:34:19.604489  593211 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:34:19.604492  593211 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:34:19.604495  593211 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:34:19.604501  593211 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:34:19.604504  593211 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:34:19.604507  593211 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:34:19.604511  593211 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:34:19.604514  593211 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:34:19.604517  593211 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:34:19.604521  593211 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:34:19.604525  593211 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:34:19.604529  593211 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:34:19.604532  593211 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:34:19.604535  593211 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:34:19.604540  593211 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:34:19.604544  593211 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:34:19.604547  593211 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:34:19.604550  593211 cri.go:89] found id: ""
	I1115 10:34:19.604602  593211 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:34:19.621417  593211 out.go:203] 
	W1115 10:34:19.624682  593211 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:34:19.624718  593211 out.go:285] * 
	* 
	W1115 10:34:19.641199  593211 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:34:19.644496  593211 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-800763 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.34s)
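
Note: in the log above the crictl listing does find the kube-system containers; it is the follow-up "sudo runc list -f json" that fails because /run/runc does not exist on this CRI-O node (crictl goes through the CRI runtime socket, while runc only reads its own state directory). Both probes can be replayed over SSH as a rough check, assuming the profile is still running; this is a sketch, not part of the test suite:

	# succeeded in this run: lists kube-system container IDs via the CRI runtime
	out/minikube-linux-arm64 -p addons-800763 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# failed in this run: /run/runc is missing, so runc has no state to list
	out/minikube-linux-arm64 -p addons-800763 ssh "sudo runc list -f json"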

TestAddons/parallel/Registry (15.28s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.054722ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-snxbp" [3a35acf9-5d71-44e2-ada7-fdace707ff15] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003406534s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-frc5j" [e0a7d020-9896-4589-a8bc-55ad79130028] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003864667s
addons_test.go:392: (dbg) Run:  kubectl --context addons-800763 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-800763 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-800763 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.706356631s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 ip
2025/11/15 10:34:45 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 addons disable registry --alsologtostderr -v=1: exit status 11 (292.054907ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 10:34:45.970688  594155 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:34:45.971533  594155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:45.971568  594155 out.go:374] Setting ErrFile to fd 2...
	I1115 10:34:45.971606  594155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:45.972006  594155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:34:45.972446  594155 mustload.go:66] Loading cluster: addons-800763
	I1115 10:34:45.973119  594155 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:45.973224  594155 addons.go:607] checking whether the cluster is paused
	I1115 10:34:45.973366  594155 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:45.973412  594155 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:34:45.973981  594155 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:34:45.999019  594155 ssh_runner.go:195] Run: systemctl --version
	I1115 10:34:45.999076  594155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:34:46.021059  594155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:34:46.133037  594155 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:34:46.133194  594155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:34:46.174212  594155 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:34:46.174289  594155 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:34:46.174315  594155 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:34:46.174334  594155 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:34:46.174362  594155 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:34:46.174378  594155 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:34:46.174404  594155 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:34:46.174423  594155 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:34:46.174452  594155 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:34:46.174472  594155 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:34:46.174489  594155 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:34:46.174507  594155 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:34:46.174537  594155 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:34:46.174555  594155 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:34:46.174574  594155 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:34:46.174594  594155 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:34:46.174631  594155 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:34:46.174650  594155 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:34:46.174668  594155 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:34:46.174688  594155 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:34:46.174722  594155 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:34:46.174747  594155 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:34:46.174765  594155 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:34:46.174782  594155 cri.go:89] found id: ""
	I1115 10:34:46.174860  594155 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:34:46.196130  594155 out.go:203] 
	W1115 10:34:46.199120  594155 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:34:46.199154  594155 out.go:285] * 
	* 
	W1115 10:34:46.205736  594155 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:34:46.208892  594155 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-800763 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.28s)
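
Note: the Registry test's functional checks all passed (both registry pods reported healthy, the in-cluster wget probe completed, and the GET against 192.168.49.2:5000 was issued); only the final addon-disable step failed with the same exit status 11. The in-cluster probe can be rerun on its own, assuming the profile and addon are still up (command verbatim from the log):

	kubectl --context addons-800763 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"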

TestAddons/parallel/RegistryCreds (0.48s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.035519ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-800763
addons_test.go:332: (dbg) Run:  kubectl --context addons-800763 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (265.229286ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 10:35:13.394362  595202 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:13.395165  595202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:13.395179  595202 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:13.395184  595202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:13.395785  595202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:35:13.396645  595202 mustload.go:66] Loading cluster: addons-800763
	I1115 10:35:13.397620  595202 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:13.397694  595202 addons.go:607] checking whether the cluster is paused
	I1115 10:35:13.397896  595202 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:13.397933  595202 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:35:13.398719  595202 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:35:13.417929  595202 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:13.418002  595202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:35:13.436205  595202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:35:13.543549  595202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:13.543633  595202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:13.578920  595202 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:35:13.578942  595202 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:35:13.578947  595202 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:35:13.578951  595202 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:35:13.578954  595202 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:35:13.578958  595202 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:35:13.578962  595202 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:35:13.578965  595202 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:35:13.578970  595202 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:35:13.578977  595202 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:35:13.578981  595202 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:35:13.578984  595202 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:35:13.578987  595202 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:35:13.578990  595202 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:35:13.578998  595202 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:35:13.579007  595202 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:35:13.579011  595202 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:35:13.579016  595202 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:35:13.579019  595202 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:35:13.579022  595202 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:35:13.579026  595202 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:35:13.579033  595202 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:35:13.579036  595202 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:35:13.579039  595202 cri.go:89] found id: ""
	I1115 10:35:13.579091  595202 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:35:13.594675  595202 out.go:203] 
	W1115 10:35:13.597622  595202 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:35:13.597654  595202 out.go:285] * 
	* 
	W1115 10:35:13.603348  595202 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:35:13.606855  595202 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-800763 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.48s)

TestAddons/parallel/Ingress (145.32s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-800763 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-800763 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-800763 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [d63278c4-1527-4b52-9000-a249f760d9be] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [d63278c4-1527-4b52-9000-a249f760d9be] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00310381s
I1115 10:35:08.573143  586561 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.622428771s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-800763 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-800763
helpers_test.go:243: (dbg) docker inspect addons-800763:

-- stdout --
	[
	    {
	        "Id": "b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450",
	        "Created": "2025-11-15T10:32:03.5118468Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 587715,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:32:03.580273666Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450/hostname",
	        "HostsPath": "/var/lib/docker/containers/b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450/hosts",
	        "LogPath": "/var/lib/docker/containers/b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450/b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450-json.log",
	        "Name": "/addons-800763",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-800763:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-800763",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450",
	                "LowerDir": "/var/lib/docker/overlay2/25857c918a19ef2ae9a371ad87df7bd87a6ebd70600dd8906e3cdbdd237174f0-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/25857c918a19ef2ae9a371ad87df7bd87a6ebd70600dd8906e3cdbdd237174f0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/25857c918a19ef2ae9a371ad87df7bd87a6ebd70600dd8906e3cdbdd237174f0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/25857c918a19ef2ae9a371ad87df7bd87a6ebd70600dd8906e3cdbdd237174f0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-800763",
	                "Source": "/var/lib/docker/volumes/addons-800763/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-800763",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-800763",
	                "name.minikube.sigs.k8s.io": "addons-800763",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b871a302261ad83ac50b6e0e0624dd37e10bcad8ef4b3002c71c77a96a6ce618",
	            "SandboxKey": "/var/run/docker/netns/b871a302261a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-800763": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:62:f9:92:c8:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d68a6b13710afe5f0b1c96904b827fbb9442383b2ff3417bc4aa15f1ca8ad42e",
	                    "EndpointID": "be62c7da60fe1d40d6ad7b2a7985994151bbbcd09d99d017e889f738ffe7d8e2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-800763",
	                        "b45b50a37343"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-800763 -n addons-800763
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-800763 logs -n 25: (1.433791248s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-855751                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-855751 │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:31 UTC │
	│ start   │ --download-only -p binary-mirror-014145 --alsologtostderr --binary-mirror http://127.0.0.1:33887 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-014145   │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ delete  │ -p binary-mirror-014145                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-014145   │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:31 UTC │
	│ addons  │ enable dashboard -p addons-800763                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ addons  │ disable dashboard -p addons-800763                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ start   │ -p addons-800763 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:34 UTC │
	│ addons  │ addons-800763 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ addons  │ addons-800763 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ addons  │ enable headlamp -p addons-800763 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ addons  │ addons-800763 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ip      │ addons-800763 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:34 UTC │
	│ addons  │ addons-800763 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ addons  │ addons-800763 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ addons  │ addons-800763 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ ssh     │ addons-800763 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ addons  │ addons-800763 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ addons  │ addons-800763 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-800763                                                                                                                                                                                                                                                                                                                                                                                           │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ addons-800763 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ addons  │ addons-800763 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ addons  │ addons-800763 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ ssh     │ addons-800763 ssh cat /opt/local-path-provisioner/pvc-a60dc574-d334-43fd-b1ee-4958d621bb8e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ addons-800763 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ addons  │ addons-800763 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ ip      │ addons-800763 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:31:37
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:31:37.286687  587312 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:31:37.286915  587312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:31:37.286947  587312 out.go:374] Setting ErrFile to fd 2...
	I1115 10:31:37.286968  587312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:31:37.287236  587312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:31:37.287715  587312 out.go:368] Setting JSON to false
	I1115 10:31:37.288578  587312 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8048,"bootTime":1763194649,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 10:31:37.288676  587312 start.go:143] virtualization:  
	I1115 10:31:37.292002  587312 out.go:179] * [addons-800763] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:31:37.295880  587312 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:31:37.296007  587312 notify.go:221] Checking for updates...
	I1115 10:31:37.301811  587312 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:31:37.304749  587312 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 10:31:37.307587  587312 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 10:31:37.310504  587312 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:31:37.313330  587312 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:31:37.316334  587312 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:31:37.339913  587312 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:31:37.340029  587312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:31:37.405700  587312 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-15 10:31:37.396826605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:31:37.405814  587312 docker.go:319] overlay module found
	I1115 10:31:37.408979  587312 out.go:179] * Using the docker driver based on user configuration
	I1115 10:31:37.411735  587312 start.go:309] selected driver: docker
	I1115 10:31:37.411755  587312 start.go:930] validating driver "docker" against <nil>
	I1115 10:31:37.411771  587312 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:31:37.412518  587312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:31:37.464977  587312 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-15 10:31:37.455360743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:31:37.465141  587312 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:31:37.465384  587312 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:31:37.468270  587312 out.go:179] * Using Docker driver with root privileges
	I1115 10:31:37.471115  587312 cni.go:84] Creating CNI manager for ""
	I1115 10:31:37.471179  587312 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:31:37.471192  587312 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:31:37.471264  587312 start.go:353] cluster config:
	{Name:addons-800763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-800763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1115 10:31:37.474372  587312 out.go:179] * Starting "addons-800763" primary control-plane node in "addons-800763" cluster
	I1115 10:31:37.477098  587312 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:31:37.480102  587312 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:31:37.482984  587312 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:31:37.483041  587312 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:31:37.483054  587312 cache.go:65] Caching tarball of preloaded images
	I1115 10:31:37.483064  587312 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:31:37.483140  587312 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:31:37.483154  587312 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:31:37.483489  587312 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/config.json ...
	I1115 10:31:37.483518  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/config.json: {Name:mkf94e9d4ef8eeb627c4a5c077a1fd07c2af97b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:31:37.499704  587312 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 10:31:37.499831  587312 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 10:31:37.499850  587312 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1115 10:31:37.499854  587312 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1115 10:31:37.499862  587312 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1115 10:31:37.499867  587312 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1115 10:31:55.354260  587312 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1115 10:31:55.354300  587312 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:31:55.354339  587312 start.go:360] acquireMachinesLock for addons-800763: {Name:mkeeb6cf50ec492af8c3057917054764961dc2ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:31:55.354485  587312 start.go:364] duration metric: took 121.635µs to acquireMachinesLock for "addons-800763"
	I1115 10:31:55.354520  587312 start.go:93] Provisioning new machine with config: &{Name:addons-800763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-800763 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:31:55.354610  587312 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:31:55.358051  587312 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1115 10:31:55.358313  587312 start.go:159] libmachine.API.Create for "addons-800763" (driver="docker")
	I1115 10:31:55.358352  587312 client.go:173] LocalClient.Create starting
	I1115 10:31:55.358467  587312 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 10:31:55.473535  587312 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 10:31:56.318648  587312 cli_runner.go:164] Run: docker network inspect addons-800763 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:31:56.335517  587312 cli_runner.go:211] docker network inspect addons-800763 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:31:56.335631  587312 network_create.go:284] running [docker network inspect addons-800763] to gather additional debugging logs...
	I1115 10:31:56.335659  587312 cli_runner.go:164] Run: docker network inspect addons-800763
	W1115 10:31:56.351975  587312 cli_runner.go:211] docker network inspect addons-800763 returned with exit code 1
	I1115 10:31:56.352004  587312 network_create.go:287] error running [docker network inspect addons-800763]: docker network inspect addons-800763: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-800763 not found
	I1115 10:31:56.352019  587312 network_create.go:289] output of [docker network inspect addons-800763]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-800763 not found
	
	** /stderr **
	I1115 10:31:56.352129  587312 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:31:56.368577  587312 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019afa50}
	I1115 10:31:56.368625  587312 network_create.go:124] attempt to create docker network addons-800763 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1115 10:31:56.368690  587312 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-800763 addons-800763
	I1115 10:31:56.425254  587312 network_create.go:108] docker network addons-800763 192.168.49.0/24 created
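The lines above show minikube picking the free subnet 192.168.49.0/24 and creating the addons-800763 bridge network with an MTU of 1500. A minimal sketch of double-checking that network from the host; the inspect format string below is an assumption, not something the test ran:

    # Sketch: confirm the bridge network created above
    docker network inspect addons-800763 \
      --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}} mtu={{index .Options "com.docker.network.driver.mtu"}}'
    # expected, per the log above: subnet=192.168.49.0/24 gateway=192.168.49.1 mtu=1500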
	I1115 10:31:56.425288  587312 kic.go:121] calculated static IP "192.168.49.2" for the "addons-800763" container
	I1115 10:31:56.425361  587312 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:31:56.440738  587312 cli_runner.go:164] Run: docker volume create addons-800763 --label name.minikube.sigs.k8s.io=addons-800763 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:31:56.458783  587312 oci.go:103] Successfully created a docker volume addons-800763
	I1115 10:31:56.458873  587312 cli_runner.go:164] Run: docker run --rm --name addons-800763-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-800763 --entrypoint /usr/bin/test -v addons-800763:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:31:58.529630  587312 cli_runner.go:217] Completed: docker run --rm --name addons-800763-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-800763 --entrypoint /usr/bin/test -v addons-800763:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (2.070716474s)
	I1115 10:31:58.529663  587312 oci.go:107] Successfully prepared a docker volume addons-800763
	I1115 10:31:58.529720  587312 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:31:58.529735  587312 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:31:58.529797  587312 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-800763:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:32:03.436836  587312 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-800763:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.906982785s)
	I1115 10:32:03.436887  587312 kic.go:203] duration metric: took 4.907146176s to extract preloaded images to volume ...
	W1115 10:32:03.437035  587312 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:32:03.437161  587312 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:32:03.496409  587312 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-800763 --name addons-800763 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-800763 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-800763 --network addons-800763 --ip 192.168.49.2 --volume addons-800763:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:32:03.795587  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Running}}
	I1115 10:32:03.815598  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:03.841313  587312 cli_runner.go:164] Run: docker exec addons-800763 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:32:03.891501  587312 oci.go:144] the created container "addons-800763" has a running status.
	I1115 10:32:03.891536  587312 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa...
	I1115 10:32:04.282914  587312 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:32:04.307305  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:04.338932  587312 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:32:04.338953  587312 kic_runner.go:114] Args: [docker exec --privileged addons-800763 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:32:04.400472  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:04.427095  587312 machine.go:94] provisionDockerMachine start ...
	I1115 10:32:04.427207  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:04.458334  587312 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:04.458653  587312 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I1115 10:32:04.458663  587312 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:32:04.461063  587312 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44640->127.0.0.1:33509: read: connection reset by peer
	I1115 10:32:07.612377  587312 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-800763
	
	I1115 10:32:07.612397  587312 ubuntu.go:182] provisioning hostname "addons-800763"
	I1115 10:32:07.612459  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:07.629751  587312 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:07.630076  587312 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I1115 10:32:07.630093  587312 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-800763 && echo "addons-800763" | sudo tee /etc/hostname
	I1115 10:32:07.790030  587312 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-800763
	
	I1115 10:32:07.790108  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:07.808194  587312 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:07.808523  587312 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I1115 10:32:07.808540  587312 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-800763' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-800763/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-800763' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:32:07.961125  587312 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:32:07.961157  587312 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 10:32:07.961184  587312 ubuntu.go:190] setting up certificates
	I1115 10:32:07.961200  587312 provision.go:84] configureAuth start
	I1115 10:32:07.961271  587312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-800763
	I1115 10:32:07.978195  587312 provision.go:143] copyHostCerts
	I1115 10:32:07.978278  587312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 10:32:07.978399  587312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 10:32:07.978467  587312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 10:32:07.978517  587312 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.addons-800763 san=[127.0.0.1 192.168.49.2 addons-800763 localhost minikube]
	I1115 10:32:08.200513  587312 provision.go:177] copyRemoteCerts
	I1115 10:32:08.200581  587312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:32:08.200632  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:08.216823  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:08.320747  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 10:32:08.338332  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 10:32:08.355641  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:32:08.372090  587312 provision.go:87] duration metric: took 410.873505ms to configureAuth
	I1115 10:32:08.372131  587312 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:32:08.372338  587312 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:32:08.372440  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:08.389244  587312 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:08.389561  587312 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I1115 10:32:08.389582  587312 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:32:08.652418  587312 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:32:08.652439  587312 machine.go:97] duration metric: took 4.225315697s to provisionDockerMachine
	I1115 10:32:08.652449  587312 client.go:176] duration metric: took 13.294087815s to LocalClient.Create
	I1115 10:32:08.652464  587312 start.go:167] duration metric: took 13.294153194s to libmachine.API.Create "addons-800763"
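Provisioning above ends by writing CRIO_MINIKUBE_OPTIONS (an --insecure-registry flag for the service CIDR) to /etc/sysconfig/crio.minikube and restarting CRI-O. A sketch of confirming that from inside the node; the ssh invocation is an assumption, since the run only records the command's output:

    # Sketch: check the CRI-O options drop-in written during provisioning
    out/minikube-linux-arm64 -p addons-800763 ssh -- cat /etc/sysconfig/crio.minikube
    # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    out/minikube-linux-arm64 -p addons-800763 ssh -- sudo systemctl is-active crio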
	I1115 10:32:08.652471  587312 start.go:293] postStartSetup for "addons-800763" (driver="docker")
	I1115 10:32:08.652481  587312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:32:08.652559  587312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:32:08.652604  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:08.669410  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:08.772907  587312 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:32:08.776190  587312 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:32:08.776219  587312 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:32:08.776230  587312 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 10:32:08.776292  587312 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 10:32:08.776320  587312 start.go:296] duration metric: took 123.842909ms for postStartSetup
	I1115 10:32:08.776621  587312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-800763
	I1115 10:32:08.793293  587312 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/config.json ...
	I1115 10:32:08.793580  587312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:32:08.793639  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:08.813566  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:08.917776  587312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:32:08.922216  587312 start.go:128] duration metric: took 13.567589009s to createHost
	I1115 10:32:08.922282  587312 start.go:83] releasing machines lock for "addons-800763", held for 13.56778641s
	I1115 10:32:08.922373  587312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-800763
	I1115 10:32:08.939243  587312 ssh_runner.go:195] Run: cat /version.json
	I1115 10:32:08.939292  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:08.939598  587312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:32:08.939671  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:08.958166  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:08.966553  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:09.148238  587312 ssh_runner.go:195] Run: systemctl --version
	I1115 10:32:09.154572  587312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:32:09.190097  587312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:32:09.194512  587312 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:32:09.194602  587312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:32:09.224170  587312 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 10:32:09.224208  587312 start.go:496] detecting cgroup driver to use...
	I1115 10:32:09.224265  587312 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:32:09.224339  587312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:32:09.240223  587312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:32:09.252694  587312 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:32:09.252810  587312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:32:09.270093  587312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:32:09.288582  587312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:32:09.406534  587312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:32:09.533739  587312 docker.go:234] disabling docker service ...
	I1115 10:32:09.533882  587312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:32:09.555841  587312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:32:09.568767  587312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:32:09.686368  587312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:32:09.799151  587312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:32:09.812975  587312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:32:09.827839  587312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:32:09.827950  587312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:09.837490  587312 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:32:09.837611  587312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:09.847407  587312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:09.856593  587312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:09.866118  587312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:32:09.873919  587312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:09.883218  587312 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:09.897405  587312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:09.906201  587312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:32:09.913762  587312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:32:09.921148  587312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:32:10.031234  587312 ssh_runner.go:195] Run: sudo systemctl restart crio
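The sed commands above edit CRI-O's drop-in config before this restart: they pin the pause image, switch the cgroup manager to cgroupfs, put conmon in the pod cgroup, and allow unprivileged low ports. A sketch of spot-checking the result inside the node; the expected lines are reconstructed from those sed expressions, not captured output:

    # Sketch: spot-check the CRI-O drop-in after the edits above
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # reconstructed expectation:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",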
	I1115 10:32:10.164600  587312 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:32:10.164686  587312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:32:10.168611  587312 start.go:564] Will wait 60s for crictl version
	I1115 10:32:10.168685  587312 ssh_runner.go:195] Run: which crictl
	I1115 10:32:10.172265  587312 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:32:10.197574  587312 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
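The probe above reports CRI-O 1.34.1 speaking CRI v1. The same check can be reproduced against this profile from the host; the command below is a sketch, not part of the recorded run:

    # Sketch: re-run the runtime version probe against the addons-800763 node
    out/minikube-linux-arm64 -p addons-800763 ssh -- sudo /usr/local/bin/crictl version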
	I1115 10:32:10.197667  587312 ssh_runner.go:195] Run: crio --version
	I1115 10:32:10.229125  587312 ssh_runner.go:195] Run: crio --version
	I1115 10:32:10.259738  587312 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:32:10.262663  587312 cli_runner.go:164] Run: docker network inspect addons-800763 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:32:10.278466  587312 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 10:32:10.282416  587312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:32:10.292039  587312 kubeadm.go:884] updating cluster {Name:addons-800763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-800763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:32:10.292167  587312 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:32:10.292234  587312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:32:10.328150  587312 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:32:10.328175  587312 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:32:10.328229  587312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:32:10.353479  587312 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:32:10.353503  587312 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:32:10.353513  587312 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 10:32:10.353614  587312 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-800763 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-800763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:32:10.353694  587312 ssh_runner.go:195] Run: crio config
	I1115 10:32:10.405394  587312 cni.go:84] Creating CNI manager for ""
	I1115 10:32:10.405464  587312 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:32:10.405506  587312 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:32:10.405560  587312 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-800763 NodeName:addons-800763 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:32:10.405735  587312 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-800763"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:32:10.405849  587312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:32:10.413524  587312 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:32:10.413654  587312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:32:10.421379  587312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 10:32:10.434189  587312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:32:10.448162  587312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
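At this point the rendered kubeadm config above has been copied into the node as /var/tmp/minikube/kubeadm.yaml.new. A sketch of validating it by hand, assuming the kubeadm config validate subcommand is available in this kubeadm release and that the kubeadm binary sits under /var/lib/minikube/binaries/v1.34.1 alongside the kubelet referenced above:

    # Sketch: sanity-check the generated kubeadm config inside the node
    out/minikube-linux-arm64 -p addons-800763 ssh -- sudo \
      /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new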
	I1115 10:32:10.461247  587312 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:32:10.464788  587312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:32:10.475029  587312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:32:10.589862  587312 ssh_runner.go:195] Run: sudo systemctl start kubelet
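With the 10-kubeadm.conf drop-in in place and kubelet started, a quick sketch of confirming the unit carries the minikube flags shown earlier; this check is an assumption, not something the test performs:

    # Sketch: systemctl cat prints the kubelet unit plus its drop-ins
    out/minikube-linux-arm64 -p addons-800763 ssh -- systemctl cat kubelet
    # the ExecStart drop-in should carry --hostname-override=addons-800763 --node-ip=192.168.49.2
    out/minikube-linux-arm64 -p addons-800763 ssh -- sudo systemctl is-active kubelet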
	I1115 10:32:10.605029  587312 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763 for IP: 192.168.49.2
	I1115 10:32:10.605100  587312 certs.go:195] generating shared ca certs ...
	I1115 10:32:10.605131  587312 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:10.605332  587312 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 10:32:10.891595  587312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt ...
	I1115 10:32:10.891630  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt: {Name:mkd2d964bbd950f2151022277ba6c34aa6bbfb67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:10.891862  587312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key ...
	I1115 10:32:10.891879  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key: {Name:mkd189b08acbe67e485f91570547219e89ff9e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:10.891997  587312 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 10:32:11.558329  587312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt ...
	I1115 10:32:11.558359  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt: {Name:mkfe97a846764f12a527e0da6693f346b8237e50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:11.558555  587312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key ...
	I1115 10:32:11.558568  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key: {Name:mkc9b5bc0b27eb79bdc3d49b17edb212aca78dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
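The shared CA material generated above lands under the MINIKUBE_HOME shown earlier. A sketch of inspecting it with openssl; the invocation is an assumption, and the CN is expected to match the APIServerName minikubeCA from the cluster config above:

    # Sketch: inspect the cluster CA certificate generated above
    openssl x509 -noout -subject -enddate \
      -in /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt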
	I1115 10:32:11.558646  587312 certs.go:257] generating profile certs ...
	I1115 10:32:11.558713  587312 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.key
	I1115 10:32:11.558734  587312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt with IP's: []
	I1115 10:32:11.738451  587312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt ...
	I1115 10:32:11.738482  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: {Name:mke7a97fa7c7255f436f93e6c3f21e4dc04c89c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:11.739278  587312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.key ...
	I1115 10:32:11.739293  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.key: {Name:mk971d630f8c84d7694f608589463b38a379183d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:11.739384  587312 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.key.7db96ebd
	I1115 10:32:11.739402  587312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.crt.7db96ebd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1115 10:32:12.319488  587312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.crt.7db96ebd ...
	I1115 10:32:12.319520  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.crt.7db96ebd: {Name:mk41f7f6e914662c8f5046a8f6123933c0255630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:12.319711  587312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.key.7db96ebd ...
	I1115 10:32:12.319725  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.key.7db96ebd: {Name:mkd6899b5d4e1dedc3a61fb0284b00ab5964bec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:12.320382  587312 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.crt.7db96ebd -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.crt
	I1115 10:32:12.320465  587312 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.key.7db96ebd -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.key
	I1115 10:32:12.320520  587312 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.key
	I1115 10:32:12.320540  587312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.crt with IP's: []
	I1115 10:32:13.281751  587312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.crt ...
	I1115 10:32:13.281793  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.crt: {Name:mkddd4d1dd755fec304ec78625965f894e655302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:13.282645  587312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.key ...
	I1115 10:32:13.282667  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.key: {Name:mkdb6382940922bea3ca5f2f8ef722e65a3541c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:13.282980  587312 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:32:13.283026  587312 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 10:32:13.283058  587312 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:32:13.283088  587312 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 10:32:13.283768  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:32:13.303217  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 10:32:13.323151  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:32:13.342730  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:32:13.361520  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 10:32:13.379565  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:32:13.396357  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:32:13.413477  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:32:13.430916  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:32:13.448913  587312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:32:13.462006  587312 ssh_runner.go:195] Run: openssl version
	I1115 10:32:13.468068  587312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:32:13.476511  587312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:32:13.480376  587312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:32:13.480460  587312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:32:13.522348  587312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:32:13.530682  587312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:32:13.534176  587312 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:32:13.534234  587312 kubeadm.go:401] StartCluster: {Name:addons-800763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-800763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:32:13.534310  587312 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:32:13.534370  587312 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:32:13.561007  587312 cri.go:89] found id: ""
	I1115 10:32:13.561094  587312 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:32:13.569128  587312 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:32:13.576749  587312 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:32:13.576847  587312 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:32:13.584793  587312 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:32:13.584812  587312 kubeadm.go:158] found existing configuration files:
	
	I1115 10:32:13.584875  587312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:32:13.592625  587312 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:32:13.592739  587312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:32:13.600125  587312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:32:13.607665  587312 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:32:13.607763  587312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:32:13.614739  587312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:32:13.622438  587312 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:32:13.622555  587312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:32:13.629989  587312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:32:13.637804  587312 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:32:13.637917  587312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:32:13.645085  587312 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:32:13.688140  587312 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:32:13.688504  587312 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:32:13.719238  587312 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:32:13.719343  587312 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 10:32:13.719399  587312 kubeadm.go:319] OS: Linux
	I1115 10:32:13.719473  587312 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:32:13.719549  587312 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:32:13.719621  587312 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:32:13.719693  587312 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:32:13.719766  587312 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:32:13.719835  587312 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:32:13.719903  587312 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:32:13.719973  587312 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:32:13.720041  587312 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:32:13.787866  587312 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:32:13.788037  587312 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:32:13.788160  587312 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:32:13.797380  587312 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:32:13.803545  587312 out.go:252]   - Generating certificates and keys ...
	I1115 10:32:13.803671  587312 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:32:13.803770  587312 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:32:15.821236  587312 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:32:16.076556  587312 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:32:16.738380  587312 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:32:17.441798  587312 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:32:17.805343  587312 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:32:17.805558  587312 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-800763 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 10:32:18.060448  587312 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:32:18.060669  587312 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-800763 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 10:32:19.321035  587312 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:32:19.953239  587312 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:32:20.465654  587312 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:32:20.465942  587312 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:32:20.894375  587312 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:32:21.108982  587312 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:32:21.580233  587312 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:32:22.441181  587312 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:32:22.753220  587312 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:32:22.754076  587312 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:32:22.757000  587312 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:32:22.760548  587312 out.go:252]   - Booting up control plane ...
	I1115 10:32:22.760666  587312 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:32:22.760757  587312 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:32:22.760835  587312 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:32:22.775423  587312 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:32:22.775780  587312 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:32:22.783695  587312 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:32:22.784665  587312 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:32:22.785080  587312 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:32:22.916632  587312 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:32:22.916760  587312 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:32:24.417347  587312 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500840447s
	I1115 10:32:24.420980  587312 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:32:24.421101  587312 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1115 10:32:24.421410  587312 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:32:24.421505  587312 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:32:28.532687  587312 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.110964158s
	I1115 10:32:29.383077  587312 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.962056799s
	I1115 10:32:30.922577  587312 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501473576s
	I1115 10:32:30.941926  587312 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:32:30.961614  587312 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:32:30.977197  587312 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:32:30.977610  587312 kubeadm.go:319] [mark-control-plane] Marking the node addons-800763 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:32:30.990175  587312 kubeadm.go:319] [bootstrap-token] Using token: nlvole.sy74anm863filc3q
	I1115 10:32:30.995090  587312 out.go:252]   - Configuring RBAC rules ...
	I1115 10:32:30.995226  587312 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:32:30.999399  587312 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:32:31.008355  587312 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:32:31.015191  587312 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:32:31.020095  587312 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:32:31.025576  587312 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:32:31.330611  587312 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:32:31.787957  587312 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:32:32.329472  587312 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:32:32.330833  587312 kubeadm.go:319] 
	I1115 10:32:32.330914  587312 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:32:32.330923  587312 kubeadm.go:319] 
	I1115 10:32:32.331003  587312 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:32:32.331008  587312 kubeadm.go:319] 
	I1115 10:32:32.331035  587312 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:32:32.331098  587312 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:32:32.331151  587312 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:32:32.331155  587312 kubeadm.go:319] 
	I1115 10:32:32.331212  587312 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:32:32.331216  587312 kubeadm.go:319] 
	I1115 10:32:32.331266  587312 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:32:32.331270  587312 kubeadm.go:319] 
	I1115 10:32:32.331325  587312 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:32:32.331404  587312 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:32:32.331475  587312 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:32:32.331480  587312 kubeadm.go:319] 
	I1115 10:32:32.331569  587312 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:32:32.331649  587312 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:32:32.331654  587312 kubeadm.go:319] 
	I1115 10:32:32.331743  587312 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nlvole.sy74anm863filc3q \
	I1115 10:32:32.331851  587312 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a \
	I1115 10:32:32.331873  587312 kubeadm.go:319] 	--control-plane 
	I1115 10:32:32.331895  587312 kubeadm.go:319] 
	I1115 10:32:32.331985  587312 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:32:32.331989  587312 kubeadm.go:319] 
	I1115 10:32:32.332075  587312 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nlvole.sy74anm863filc3q \
	I1115 10:32:32.332183  587312 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a 
	I1115 10:32:32.334745  587312 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:32:32.335001  587312 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 10:32:32.335123  587312 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:32:32.335161  587312 cni.go:84] Creating CNI manager for ""
	I1115 10:32:32.335178  587312 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:32:32.338388  587312 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:32:32.341339  587312 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:32:32.345403  587312 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:32:32.345423  587312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:32:32.358251  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:32:32.643532  587312 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:32:32.643739  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:32.643866  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-800763 minikube.k8s.io/updated_at=2025_11_15T10_32_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=addons-800763 minikube.k8s.io/primary=true
	I1115 10:32:32.659471  587312 ops.go:34] apiserver oom_adj: -16
	I1115 10:32:32.759116  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:33.259931  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:33.759788  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:34.259952  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:34.759221  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:35.259195  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:35.759883  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:36.260029  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:36.759987  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:37.260075  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:37.393123  587312 kubeadm.go:1114] duration metric: took 4.749432812s to wait for elevateKubeSystemPrivileges
	I1115 10:32:37.393178  587312 kubeadm.go:403] duration metric: took 23.858946632s to StartCluster
	I1115 10:32:37.393196  587312 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:37.393365  587312 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 10:32:37.393847  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:37.394109  587312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:32:37.394210  587312 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:32:37.394430  587312 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:32:37.394546  587312 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1115 10:32:37.394636  587312 addons.go:70] Setting yakd=true in profile "addons-800763"
	I1115 10:32:37.394656  587312 addons.go:239] Setting addon yakd=true in "addons-800763"
	I1115 10:32:37.394681  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.395141  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.395763  587312 addons.go:70] Setting inspektor-gadget=true in profile "addons-800763"
	I1115 10:32:37.395784  587312 addons.go:239] Setting addon inspektor-gadget=true in "addons-800763"
	I1115 10:32:37.395809  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.396223  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.396682  587312 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-800763"
	I1115 10:32:37.396703  587312 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-800763"
	I1115 10:32:37.396727  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.397143  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.400935  587312 addons.go:70] Setting cloud-spanner=true in profile "addons-800763"
	I1115 10:32:37.400982  587312 addons.go:239] Setting addon cloud-spanner=true in "addons-800763"
	I1115 10:32:37.401016  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.401441  587312 addons.go:70] Setting metrics-server=true in profile "addons-800763"
	I1115 10:32:37.401955  587312 addons.go:239] Setting addon metrics-server=true in "addons-800763"
	I1115 10:32:37.401489  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.404777  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.401496  587312 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-800763"
	I1115 10:32:37.405290  587312 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-800763"
	I1115 10:32:37.405313  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.405762  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.401550  587312 addons.go:70] Setting default-storageclass=true in profile "addons-800763"
	I1115 10:32:37.418152  587312 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-800763"
	I1115 10:32:37.401554  587312 addons.go:70] Setting gcp-auth=true in profile "addons-800763"
	I1115 10:32:37.418896  587312 mustload.go:66] Loading cluster: addons-800763
	I1115 10:32:37.401557  587312 addons.go:70] Setting ingress=true in profile "addons-800763"
	I1115 10:32:37.421792  587312 addons.go:239] Setting addon ingress=true in "addons-800763"
	I1115 10:32:37.421873  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.422371  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.422972  587312 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:32:37.423292  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.401561  587312 addons.go:70] Setting ingress-dns=true in profile "addons-800763"
	I1115 10:32:37.401593  587312 out.go:179] * Verifying Kubernetes components...
	I1115 10:32:37.401600  587312 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-800763"
	I1115 10:32:37.401612  587312 addons.go:70] Setting registry=true in profile "addons-800763"
	I1115 10:32:37.401619  587312 addons.go:70] Setting registry-creds=true in profile "addons-800763"
	I1115 10:32:37.401624  587312 addons.go:70] Setting storage-provisioner=true in profile "addons-800763"
	I1115 10:32:37.401634  587312 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-800763"
	I1115 10:32:37.401645  587312 addons.go:70] Setting volcano=true in profile "addons-800763"
	I1115 10:32:37.401651  587312 addons.go:70] Setting volumesnapshots=true in profile "addons-800763"
	I1115 10:32:37.421380  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.421629  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.456283  587312 addons.go:239] Setting addon ingress-dns=true in "addons-800763"
	I1115 10:32:37.456409  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.456946  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.457211  587312 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-800763"
	I1115 10:32:37.461317  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.484636  587312 addons.go:239] Setting addon volcano=true in "addons-800763"
	I1115 10:32:37.484733  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.485249  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.500455  587312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:32:37.500733  587312 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-800763"
	I1115 10:32:37.500795  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.501365  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.519484  587312 addons.go:239] Setting addon volumesnapshots=true in "addons-800763"
	I1115 10:32:37.519590  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.520192  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.538847  587312 addons.go:239] Setting addon registry=true in "addons-800763"
	I1115 10:32:37.539255  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.539755  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.566426  587312 addons.go:239] Setting addon registry-creds=true in "addons-800763"
	I1115 10:32:37.566554  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.573178  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.608971  587312 addons.go:239] Setting addon storage-provisioner=true in "addons-800763"
	I1115 10:32:37.609070  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.609576  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.618249  587312 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1115 10:32:37.622487  587312 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1115 10:32:37.622517  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1115 10:32:37.622582  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.634442  587312 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1115 10:32:37.640597  587312 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 10:32:37.640625  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1115 10:32:37.640693  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.664476  587312 addons.go:239] Setting addon default-storageclass=true in "addons-800763"
	I1115 10:32:37.664516  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.666689  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.667002  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.668347  587312 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1115 10:32:37.695606  587312 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1115 10:32:37.705584  587312 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1115 10:32:37.705648  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1115 10:32:37.705732  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.711868  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1115 10:32:37.715675  587312 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-800763"
	I1115 10:32:37.715716  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.716118  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.720072  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1115 10:32:37.724200  587312 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1115 10:32:37.724225  587312 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1115 10:32:37.724285  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.746905  587312 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1115 10:32:37.751243  587312 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1115 10:32:37.757931  587312 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 10:32:37.757955  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1115 10:32:37.758026  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	W1115 10:32:37.765804  587312 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1115 10:32:37.794184  587312 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 10:32:37.796074  587312 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1115 10:32:37.820041  587312 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 10:32:37.820106  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1115 10:32:37.820223  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.846928  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1115 10:32:37.853265  587312 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 10:32:37.881966  587312 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1115 10:32:37.885483  587312 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1115 10:32:37.885548  587312 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1115 10:32:37.885644  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.885675  587312 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1115 10:32:37.894808  587312 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 10:32:37.894886  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1115 10:32:37.894978  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.885786  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:37.885960  587312 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 10:32:37.916202  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1115 10:32:37.916280  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.885966  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1115 10:32:37.885986  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:37.886048  587312 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:32:37.917521  587312 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:32:37.917581  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.928991  587312 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1115 10:32:37.929015  587312 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1115 10:32:37.929087  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.941235  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1115 10:32:37.947717  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1115 10:32:37.952989  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1115 10:32:37.956012  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1115 10:32:37.957872  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:37.958198  587312 out.go:179]   - Using image docker.io/registry:3.0.0
	I1115 10:32:37.958748  587312 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:32:37.959415  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:37.965288  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1115 10:32:37.965321  587312 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1115 10:32:37.965534  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:37.968221  587312 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1115 10:32:37.965607  587312 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:32:37.966517  587312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:32:37.969697  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:32:37.969767  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.970346  587312 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1115 10:32:37.970362  587312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1115 10:32:37.970414  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.981421  587312 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1115 10:32:37.981448  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1115 10:32:37.981510  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:38.002465  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.005830  587312 out.go:179]   - Using image docker.io/busybox:stable
	I1115 10:32:38.009746  587312 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 10:32:38.009776  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1115 10:32:38.009854  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:38.071450  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.085270  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.090511  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.129153  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.140126  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.152088  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.158603  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.165482  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.166365  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	W1115 10:32:38.168420  587312 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1115 10:32:38.168460  587312 retry.go:31] will retry after 144.631039ms: ssh: handshake failed: EOF
	I1115 10:32:38.273510  587312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:32:38.594283  587312 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1115 10:32:38.594351  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1115 10:32:38.635470  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:32:38.672203  587312 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1115 10:32:38.672273  587312 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1115 10:32:38.683919  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:32:38.797699  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1115 10:32:38.808618  587312 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1115 10:32:38.808694  587312 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1115 10:32:38.816292  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 10:32:38.832851  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 10:32:38.836071  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1115 10:32:38.838625  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 10:32:38.878464  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 10:32:38.880834  587312 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 10:32:38.880894  587312 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1115 10:32:38.917565  587312 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1115 10:32:38.917640  587312 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1115 10:32:38.945607  587312 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1115 10:32:38.945681  587312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1115 10:32:38.980419  587312 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1115 10:32:38.980493  587312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1115 10:32:39.105169  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 10:32:39.106766  587312 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1115 10:32:39.106835  587312 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1115 10:32:39.109054  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 10:32:39.114348  587312 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1115 10:32:39.114426  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1115 10:32:39.146366  587312 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1115 10:32:39.146440  587312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1115 10:32:39.164219  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 10:32:39.184092  587312 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1115 10:32:39.184168  587312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1115 10:32:39.241349  587312 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1115 10:32:39.241433  587312 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1115 10:32:39.292463  587312 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1115 10:32:39.292538  587312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1115 10:32:39.317893  587312 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1115 10:32:39.317972  587312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1115 10:32:39.318460  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1115 10:32:39.446237  587312 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1115 10:32:39.446313  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1115 10:32:39.451395  587312 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1115 10:32:39.451472  587312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1115 10:32:39.476629  587312 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1115 10:32:39.476708  587312 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1115 10:32:39.590961  587312 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 10:32:39.591033  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1115 10:32:39.705578  587312 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1115 10:32:39.705601  587312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1115 10:32:39.725668  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1115 10:32:39.752575  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 10:32:39.889049  587312 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.919524781s)
	I1115 10:32:39.889130  587312 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
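	A sketch of the Corefile fragment the sed pipeline above is intended to produce; the hosts block is inserted just before the existing forward directive (the surrounding directives are the cluster defaults and are only assumed here). It can be inspected afterwards with kubectl:
	
		# view the patched Corefile (kubeconfig path taken from the log above)
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl -n kube-system get configmap coredns -o yaml
		# expected fragment inside .data.Corefile:
		#     hosts {
		#        192.168.49.1 host.minikube.internal
		#        fallthrough
		#     }
		#     forward . /etc/resolv.conf
	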
	I1115 10:32:39.890272  587312 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.616733924s)
	I1115 10:32:39.891216  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.255720846s)
	I1115 10:32:39.891168  587312 node_ready.go:35] waiting up to 6m0s for node "addons-800763" to be "Ready" ...
	I1115 10:32:39.984619  587312 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1115 10:32:39.984693  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1115 10:32:40.177026  587312 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1115 10:32:40.177055  587312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1115 10:32:40.398540  587312 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-800763" context rescaled to 1 replicas
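	The "rescaled to 1 replicas" step above is performed through the Kubernetes API from inside minikube; a rough kubectl equivalent, shown only for illustration, is:
	
		kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system scale deployment coredns --replicas=1
	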
	I1115 10:32:40.451207  587312 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1115 10:32:40.451233  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1115 10:32:40.700076  587312 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1115 10:32:40.700100  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1115 10:32:40.944179  587312 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 10:32:40.944206  587312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1115 10:32:41.111216  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1115 10:32:41.910389  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:42.150163  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.466155631s)
	I1115 10:32:42.890475  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.092734485s)
	I1115 10:32:42.890536  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.074181992s)
	I1115 10:32:43.550016  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.717010247s)
	I1115 10:32:43.550050  587312 addons.go:480] Verifying addon ingress=true in "addons-800763"
	I1115 10:32:43.550210  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.714117536s)
	I1115 10:32:43.550276  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.711632698s)
	I1115 10:32:43.550311  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.671827092s)
	I1115 10:32:43.550348  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.445119592s)
	I1115 10:32:43.550396  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.441280046s)
	I1115 10:32:43.550535  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.386243169s)
	I1115 10:32:43.550550  587312 addons.go:480] Verifying addon metrics-server=true in "addons-800763"
	I1115 10:32:43.550583  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.232072014s)
	I1115 10:32:43.550595  587312 addons.go:480] Verifying addon registry=true in "addons-800763"
	I1115 10:32:43.550793  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.825058516s)
	I1115 10:32:43.554410  587312 out.go:179] * Verifying registry addon...
	I1115 10:32:43.554489  587312 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-800763 service yakd-dashboard -n yakd-dashboard
	
	I1115 10:32:43.554517  587312 out.go:179] * Verifying ingress addon...
	I1115 10:32:43.558852  587312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1115 10:32:43.559699  587312 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
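	Each kapi.go waiter above polls pods matching a label selector in a namespace until they leave Pending and report Ready. A rough kubectl equivalent (selector and namespace names taken from the log; the 6m timeout here only mirrors the node wait above and is not the waiter's actual default):
	
		# list the registry pods being waited on
		kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -l kubernetes.io/minikube-addons=registry
		# block until the registry pod reports Ready
		kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system wait pod -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m
	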
	I1115 10:32:43.573016  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.820347961s)
	W1115 10:32:43.573063  587312 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1115 10:32:43.573083  587312 retry.go:31] will retry after 251.771026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
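	The failure above is an ordering race: the VolumeSnapshotClass object sits in the same apply batch as the CRDs that define its kind, so the first apply is rejected with "no matches for kind" before the CRDs are established; the retried apply --force at 10:32:43.825 completes at 10:32:46.657 without another retry. One way to avoid the race, sketched here with kubectl and the CRD name from the stdout above, is to apply the CRDs first and wait for them to be established before applying the custom resources:
	
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for=condition=Established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	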
	I1115 10:32:43.575048  587312 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 10:32:43.575072  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:43.576432  587312 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1115 10:32:43.576455  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:43.825723  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 10:32:44.042817  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.931553462s)
	I1115 10:32:44.042853  587312 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-800763"
	I1115 10:32:44.046009  587312 out.go:179] * Verifying csi-hostpath-driver addon...
	I1115 10:32:44.049601  587312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1115 10:32:44.073115  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:44.073257  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:44.074038  587312 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 10:32:44.074062  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 10:32:44.395308  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:44.553237  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:44.561862  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:44.563385  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:45.055457  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:45.071447  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:45.071826  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:45.339020  587312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1115 10:32:45.339298  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:45.358510  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
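	The sshutil line above records the connection parameters minikube uses to reach the node container over the host-mapped port for 22/tcp; a manual equivalent, for illustration only (key path and port taken from the log):
	
		ssh -i /home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa -p 33509 docker@127.0.0.1
	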
	I1115 10:32:45.470241  587312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1115 10:32:45.483856  587312 addons.go:239] Setting addon gcp-auth=true in "addons-800763"
	I1115 10:32:45.483906  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:45.484358  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:45.501659  587312 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1115 10:32:45.501715  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:45.521164  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:45.553568  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:45.563049  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:45.563573  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:46.052604  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:46.062576  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:46.063662  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:46.553684  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:46.563892  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:46.565091  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:46.657461  587312 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.155768557s)
	I1115 10:32:46.657712  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.831654278s)
	I1115 10:32:46.660916  587312 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 10:32:46.663956  587312 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1115 10:32:46.666913  587312 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1115 10:32:46.666938  587312 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1115 10:32:46.680447  587312 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1115 10:32:46.680514  587312 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1115 10:32:46.693887  587312 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 10:32:46.693909  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1115 10:32:46.707473  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1115 10:32:46.895216  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:47.055022  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:47.139818  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:47.140492  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:47.217773  587312 addons.go:480] Verifying addon gcp-auth=true in "addons-800763"
	I1115 10:32:47.220814  587312 out.go:179] * Verifying gcp-auth addon...
	I1115 10:32:47.224525  587312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1115 10:32:47.229776  587312 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1115 10:32:47.229841  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:47.553719  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:47.563082  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:47.563217  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:47.728332  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:48.055913  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:48.062880  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:48.063263  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:48.228255  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:48.552882  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:48.561730  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:48.563612  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:48.727688  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:49.053381  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:49.062858  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:49.063243  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:49.228384  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:32:49.394148  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:49.552976  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:49.561891  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:49.562829  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:49.727439  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:50.053714  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:50.063046  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:50.063126  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:50.228798  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:50.552542  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:50.562595  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:50.562764  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:50.727687  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:51.052995  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:51.061973  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:51.065366  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:51.227464  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:32:51.394511  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:51.553553  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:51.563063  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:51.563147  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:51.728236  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:52.053334  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:52.062305  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:52.062504  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:52.228685  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:52.552965  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:52.561616  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:52.563293  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:52.728396  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:53.053393  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:53.062402  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:53.063458  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:53.227571  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:32:53.394791  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:53.552803  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:53.562527  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:53.562586  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:53.727406  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:54.053748  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:54.062651  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:54.062998  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:54.227815  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:54.553359  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:54.561815  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:54.562830  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:54.729193  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:55.053856  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:55.062510  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:55.064237  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:55.227946  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:55.552655  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:55.562695  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:55.563139  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:55.727981  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:32:55.895531  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:56.053688  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:56.062825  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:56.063051  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:56.228045  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:56.552726  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:56.564256  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:56.564349  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:56.728699  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:57.053714  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:57.062382  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:57.062681  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:57.227327  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:57.553183  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:57.562103  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:57.563293  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:57.728317  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:58.053016  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:58.062362  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:58.062928  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:58.227735  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:32:58.394442  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:58.553253  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:58.562211  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:58.563487  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:58.727650  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:59.053087  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:59.061734  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:59.063008  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:59.227620  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:59.553067  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:59.561279  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:59.562619  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:59.728677  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:00.110953  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:00.112774  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:00.134869  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:00.234331  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:00.395931  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:00.552978  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:00.563389  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:00.563799  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:00.727454  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:01.053172  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:01.062150  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:01.063327  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:01.228450  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:01.553638  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:01.563030  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:01.563094  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:01.728305  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:02.053537  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:02.062645  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:02.062723  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:02.227986  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:02.554457  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:02.563244  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:02.563396  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:02.727473  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:02.894606  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:03.053086  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:03.063187  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:03.063611  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:03.227647  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:03.553868  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:03.561758  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:03.563769  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:03.727463  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:04.052740  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:04.063171  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:04.063387  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:04.227948  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:04.552656  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:04.562801  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:04.563002  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:04.728319  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:05.053806  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:05.062812  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:05.063211  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:05.228257  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:05.395069  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:05.552935  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:05.562895  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:05.562978  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:05.728182  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:06.053624  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:06.062163  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:06.063199  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:06.230684  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:06.552355  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:06.562531  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:06.562823  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:06.728112  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:07.053319  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:07.062805  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:07.063151  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:07.228102  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:07.395437  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:07.552452  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:07.562138  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:07.563322  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:07.727483  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:08.053576  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:08.062918  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:08.063258  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:08.228257  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:08.552897  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:08.562748  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:08.562946  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:08.727663  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:09.053516  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:09.062211  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:09.063185  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:09.227942  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:09.552662  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:09.564049  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:09.564775  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:09.727714  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:09.894884  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:10.053474  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:10.062983  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:10.063179  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:10.227878  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:10.553170  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:10.562141  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:10.564273  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:10.728202  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:11.053809  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:11.062548  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:11.062752  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:11.227567  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:11.553653  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:11.562931  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:11.563101  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:11.728664  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:12.053453  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:12.063090  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:12.063358  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:12.228192  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:12.394184  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:12.553444  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:12.562510  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:12.562645  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:12.727542  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:13.053317  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:13.062264  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:13.063818  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:13.227836  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:13.557173  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:13.562516  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:13.563332  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:13.728522  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:14.053901  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:14.062241  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:14.062853  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:14.227907  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:14.395633  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:14.552385  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:14.562687  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:14.562965  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:14.727664  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:15.055241  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:15.063398  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:15.063472  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:15.228368  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:15.553351  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:15.562514  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:15.562250  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:15.727476  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:16.053608  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:16.063399  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:16.063880  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:16.228024  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:16.552739  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:16.562841  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:16.562989  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:16.727975  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:16.894801  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:17.053076  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:17.062943  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:17.063122  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:17.227719  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:17.552698  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:17.562896  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:17.563021  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:17.727828  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:18.053460  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:18.062878  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:18.063051  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:18.227973  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:18.552773  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:18.563310  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:18.563577  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:18.728322  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:18.914234  587312 node_ready.go:49] node "addons-800763" is "Ready"
	I1115 10:33:18.914266  587312 node_ready.go:38] duration metric: took 39.022826814s for node "addons-800763" to be "Ready" ...
	I1115 10:33:18.914281  587312 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:33:18.914360  587312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:33:18.931291  587312 api_server.go:72] duration metric: took 41.537046599s to wait for apiserver process to appear ...
	I1115 10:33:18.931312  587312 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:33:18.931331  587312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 10:33:18.953315  587312 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 10:33:18.955607  587312 api_server.go:141] control plane version: v1.34.1
	I1115 10:33:18.955639  587312 api_server.go:131] duration metric: took 24.31901ms to wait for apiserver health ...
	I1115 10:33:18.955649  587312 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:33:18.971288  587312 system_pods.go:59] 19 kube-system pods found
	I1115 10:33:18.971325  587312 system_pods.go:61] "coredns-66bc5c9577-b4lj6" [0ee1c332-a1ab-4604-aad1-214952a53d07] Pending
	I1115 10:33:18.971332  587312 system_pods.go:61] "csi-hostpath-attacher-0" [de7b7f73-61b6-4a36-81d1-37e603004b87] Pending
	I1115 10:33:18.971337  587312 system_pods.go:61] "csi-hostpath-resizer-0" [2c7b9981-8cbf-4a2b-9574-466c8e994e01] Pending
	I1115 10:33:18.971405  587312 system_pods.go:61] "csi-hostpathplugin-b4dh9" [55fd9315-cb8f-42e6-97a0-fde619910c0a] Pending
	I1115 10:33:18.971417  587312 system_pods.go:61] "etcd-addons-800763" [2c8e5b83-56c1-46fe-8cc2-39c23a8e008d] Running
	I1115 10:33:18.971422  587312 system_pods.go:61] "kindnet-blpd7" [c0b223fb-ecbc-4d00-a17a-40274c700c52] Running
	I1115 10:33:18.971442  587312 system_pods.go:61] "kube-apiserver-addons-800763" [62b84128-b828-4908-8b16-91e9476240ce] Running
	I1115 10:33:18.971453  587312 system_pods.go:61] "kube-controller-manager-addons-800763" [9e722ddb-a0ab-4aba-ba82-5c0bdf11860c] Running
	I1115 10:33:18.971470  587312 system_pods.go:61] "kube-ingress-dns-minikube" [2dba2ab5-4914-478f-9c10-795dbab5f3af] Pending
	I1115 10:33:18.971483  587312 system_pods.go:61] "kube-proxy-pg4bh" [43dc5f94-c11b-496a-ae5d-99234a4deef4] Running
	I1115 10:33:18.971490  587312 system_pods.go:61] "kube-scheduler-addons-800763" [08d5b3fb-0f2f-4918-89e8-894a4e0e9c1d] Running
	I1115 10:33:18.971499  587312 system_pods.go:61] "metrics-server-85b7d694d7-prnnw" [e4bf858d-9ccf-4498-8768-bc438175359c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 10:33:18.971511  587312 system_pods.go:61] "nvidia-device-plugin-daemonset-hc67v" [b43ea056-fc7f-4902-9fbc-05323afb5fcb] Pending
	I1115 10:33:18.971519  587312 system_pods.go:61] "registry-6b586f9694-snxbp" [3a35acf9-5d71-44e2-ada7-fdace707ff15] Pending
	I1115 10:33:18.971527  587312 system_pods.go:61] "registry-creds-764b6fb674-66shb" [ee5928cb-0522-4d75-86c9-719f510099ea] Pending
	I1115 10:33:18.971536  587312 system_pods.go:61] "registry-proxy-frc5j" [e0a7d020-9896-4589-a8bc-55ad79130028] Pending
	I1115 10:33:18.971567  587312 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dtn6d" [b3f3e5e9-84bb-4850-a904-1d5a2c83b360] Pending
	I1115 10:33:18.971573  587312 system_pods.go:61] "snapshot-controller-7d9fbc56b8-s9tcg" [fdaace29-7975-400f-8eab-881c79905faf] Pending
	I1115 10:33:18.971578  587312 system_pods.go:61] "storage-provisioner" [a90fe012-a522-4f37-af5b-6658b6b6e0d9] Pending
	I1115 10:33:18.971597  587312 system_pods.go:74] duration metric: took 15.941548ms to wait for pod list to return data ...
	I1115 10:33:18.971614  587312 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:33:18.985943  587312 default_sa.go:45] found service account: "default"
	I1115 10:33:18.986020  587312 default_sa.go:55] duration metric: took 14.397777ms for default service account to be created ...
	I1115 10:33:18.986044  587312 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:33:18.998325  587312 system_pods.go:86] 19 kube-system pods found
	I1115 10:33:18.998404  587312 system_pods.go:89] "coredns-66bc5c9577-b4lj6" [0ee1c332-a1ab-4604-aad1-214952a53d07] Pending
	I1115 10:33:18.998425  587312 system_pods.go:89] "csi-hostpath-attacher-0" [de7b7f73-61b6-4a36-81d1-37e603004b87] Pending
	I1115 10:33:18.998444  587312 system_pods.go:89] "csi-hostpath-resizer-0" [2c7b9981-8cbf-4a2b-9574-466c8e994e01] Pending
	I1115 10:33:18.998477  587312 system_pods.go:89] "csi-hostpathplugin-b4dh9" [55fd9315-cb8f-42e6-97a0-fde619910c0a] Pending
	I1115 10:33:18.998500  587312 system_pods.go:89] "etcd-addons-800763" [2c8e5b83-56c1-46fe-8cc2-39c23a8e008d] Running
	I1115 10:33:18.998520  587312 system_pods.go:89] "kindnet-blpd7" [c0b223fb-ecbc-4d00-a17a-40274c700c52] Running
	I1115 10:33:18.998539  587312 system_pods.go:89] "kube-apiserver-addons-800763" [62b84128-b828-4908-8b16-91e9476240ce] Running
	I1115 10:33:18.998574  587312 system_pods.go:89] "kube-controller-manager-addons-800763" [9e722ddb-a0ab-4aba-ba82-5c0bdf11860c] Running
	I1115 10:33:18.998592  587312 system_pods.go:89] "kube-ingress-dns-minikube" [2dba2ab5-4914-478f-9c10-795dbab5f3af] Pending
	I1115 10:33:18.998612  587312 system_pods.go:89] "kube-proxy-pg4bh" [43dc5f94-c11b-496a-ae5d-99234a4deef4] Running
	I1115 10:33:18.998645  587312 system_pods.go:89] "kube-scheduler-addons-800763" [08d5b3fb-0f2f-4918-89e8-894a4e0e9c1d] Running
	I1115 10:33:18.998672  587312 system_pods.go:89] "metrics-server-85b7d694d7-prnnw" [e4bf858d-9ccf-4498-8768-bc438175359c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 10:33:18.998693  587312 system_pods.go:89] "nvidia-device-plugin-daemonset-hc67v" [b43ea056-fc7f-4902-9fbc-05323afb5fcb] Pending
	I1115 10:33:18.998727  587312 system_pods.go:89] "registry-6b586f9694-snxbp" [3a35acf9-5d71-44e2-ada7-fdace707ff15] Pending
	I1115 10:33:18.998753  587312 system_pods.go:89] "registry-creds-764b6fb674-66shb" [ee5928cb-0522-4d75-86c9-719f510099ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 10:33:18.998771  587312 system_pods.go:89] "registry-proxy-frc5j" [e0a7d020-9896-4589-a8bc-55ad79130028] Pending
	I1115 10:33:18.998789  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtn6d" [b3f3e5e9-84bb-4850-a904-1d5a2c83b360] Pending
	I1115 10:33:18.998823  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s9tcg" [fdaace29-7975-400f-8eab-881c79905faf] Pending
	I1115 10:33:18.998840  587312 system_pods.go:89] "storage-provisioner" [a90fe012-a522-4f37-af5b-6658b6b6e0d9] Pending
	I1115 10:33:18.998882  587312 retry.go:31] will retry after 287.447039ms: missing components: kube-dns
	I1115 10:33:19.072981  587312 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 10:33:19.073053  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:19.073666  587312 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 10:33:19.073726  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:19.073828  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:19.291034  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:19.298229  587312 system_pods.go:86] 19 kube-system pods found
	I1115 10:33:19.298311  587312 system_pods.go:89] "coredns-66bc5c9577-b4lj6" [0ee1c332-a1ab-4604-aad1-214952a53d07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:19.298334  587312 system_pods.go:89] "csi-hostpath-attacher-0" [de7b7f73-61b6-4a36-81d1-37e603004b87] Pending
	I1115 10:33:19.298354  587312 system_pods.go:89] "csi-hostpath-resizer-0" [2c7b9981-8cbf-4a2b-9574-466c8e994e01] Pending
	I1115 10:33:19.298386  587312 system_pods.go:89] "csi-hostpathplugin-b4dh9" [55fd9315-cb8f-42e6-97a0-fde619910c0a] Pending
	I1115 10:33:19.298410  587312 system_pods.go:89] "etcd-addons-800763" [2c8e5b83-56c1-46fe-8cc2-39c23a8e008d] Running
	I1115 10:33:19.298429  587312 system_pods.go:89] "kindnet-blpd7" [c0b223fb-ecbc-4d00-a17a-40274c700c52] Running
	I1115 10:33:19.298464  587312 system_pods.go:89] "kube-apiserver-addons-800763" [62b84128-b828-4908-8b16-91e9476240ce] Running
	I1115 10:33:19.298488  587312 system_pods.go:89] "kube-controller-manager-addons-800763" [9e722ddb-a0ab-4aba-ba82-5c0bdf11860c] Running
	I1115 10:33:19.298506  587312 system_pods.go:89] "kube-ingress-dns-minikube" [2dba2ab5-4914-478f-9c10-795dbab5f3af] Pending
	I1115 10:33:19.298525  587312 system_pods.go:89] "kube-proxy-pg4bh" [43dc5f94-c11b-496a-ae5d-99234a4deef4] Running
	I1115 10:33:19.298556  587312 system_pods.go:89] "kube-scheduler-addons-800763" [08d5b3fb-0f2f-4918-89e8-894a4e0e9c1d] Running
	I1115 10:33:19.298582  587312 system_pods.go:89] "metrics-server-85b7d694d7-prnnw" [e4bf858d-9ccf-4498-8768-bc438175359c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 10:33:19.298600  587312 system_pods.go:89] "nvidia-device-plugin-daemonset-hc67v" [b43ea056-fc7f-4902-9fbc-05323afb5fcb] Pending
	I1115 10:33:19.298637  587312 system_pods.go:89] "registry-6b586f9694-snxbp" [3a35acf9-5d71-44e2-ada7-fdace707ff15] Pending
	I1115 10:33:19.298663  587312 system_pods.go:89] "registry-creds-764b6fb674-66shb" [ee5928cb-0522-4d75-86c9-719f510099ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 10:33:19.298682  587312 system_pods.go:89] "registry-proxy-frc5j" [e0a7d020-9896-4589-a8bc-55ad79130028] Pending
	I1115 10:33:19.298721  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtn6d" [b3f3e5e9-84bb-4850-a904-1d5a2c83b360] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:19.298748  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s9tcg" [fdaace29-7975-400f-8eab-881c79905faf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:19.298768  587312 system_pods.go:89] "storage-provisioner" [a90fe012-a522-4f37-af5b-6658b6b6e0d9] Pending
	I1115 10:33:19.298812  587312 retry.go:31] will retry after 246.929858ms: missing components: kube-dns
	I1115 10:33:19.566013  587312 system_pods.go:86] 19 kube-system pods found
	I1115 10:33:19.566050  587312 system_pods.go:89] "coredns-66bc5c9577-b4lj6" [0ee1c332-a1ab-4604-aad1-214952a53d07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:19.566058  587312 system_pods.go:89] "csi-hostpath-attacher-0" [de7b7f73-61b6-4a36-81d1-37e603004b87] Pending
	I1115 10:33:19.566063  587312 system_pods.go:89] "csi-hostpath-resizer-0" [2c7b9981-8cbf-4a2b-9574-466c8e994e01] Pending
	I1115 10:33:19.566071  587312 system_pods.go:89] "csi-hostpathplugin-b4dh9" [55fd9315-cb8f-42e6-97a0-fde619910c0a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 10:33:19.566076  587312 system_pods.go:89] "etcd-addons-800763" [2c8e5b83-56c1-46fe-8cc2-39c23a8e008d] Running
	I1115 10:33:19.566081  587312 system_pods.go:89] "kindnet-blpd7" [c0b223fb-ecbc-4d00-a17a-40274c700c52] Running
	I1115 10:33:19.566086  587312 system_pods.go:89] "kube-apiserver-addons-800763" [62b84128-b828-4908-8b16-91e9476240ce] Running
	I1115 10:33:19.566091  587312 system_pods.go:89] "kube-controller-manager-addons-800763" [9e722ddb-a0ab-4aba-ba82-5c0bdf11860c] Running
	I1115 10:33:19.566101  587312 system_pods.go:89] "kube-ingress-dns-minikube" [2dba2ab5-4914-478f-9c10-795dbab5f3af] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 10:33:19.566105  587312 system_pods.go:89] "kube-proxy-pg4bh" [43dc5f94-c11b-496a-ae5d-99234a4deef4] Running
	I1115 10:33:19.566113  587312 system_pods.go:89] "kube-scheduler-addons-800763" [08d5b3fb-0f2f-4918-89e8-894a4e0e9c1d] Running
	I1115 10:33:19.566118  587312 system_pods.go:89] "metrics-server-85b7d694d7-prnnw" [e4bf858d-9ccf-4498-8768-bc438175359c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 10:33:19.566132  587312 system_pods.go:89] "nvidia-device-plugin-daemonset-hc67v" [b43ea056-fc7f-4902-9fbc-05323afb5fcb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 10:33:19.566138  587312 system_pods.go:89] "registry-6b586f9694-snxbp" [3a35acf9-5d71-44e2-ada7-fdace707ff15] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 10:33:19.566145  587312 system_pods.go:89] "registry-creds-764b6fb674-66shb" [ee5928cb-0522-4d75-86c9-719f510099ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 10:33:19.566155  587312 system_pods.go:89] "registry-proxy-frc5j" [e0a7d020-9896-4589-a8bc-55ad79130028] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 10:33:19.566167  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtn6d" [b3f3e5e9-84bb-4850-a904-1d5a2c83b360] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:19.566178  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s9tcg" [fdaace29-7975-400f-8eab-881c79905faf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:19.566185  587312 system_pods.go:89] "storage-provisioner" [a90fe012-a522-4f37-af5b-6658b6b6e0d9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:33:19.566199  587312 retry.go:31] will retry after 423.660097ms: missing components: kube-dns
	I1115 10:33:19.568262  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:19.573692  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:19.574272  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:19.731048  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:19.996506  587312 system_pods.go:86] 19 kube-system pods found
	I1115 10:33:19.996544  587312 system_pods.go:89] "coredns-66bc5c9577-b4lj6" [0ee1c332-a1ab-4604-aad1-214952a53d07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:19.996553  587312 system_pods.go:89] "csi-hostpath-attacher-0" [de7b7f73-61b6-4a36-81d1-37e603004b87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 10:33:19.996561  587312 system_pods.go:89] "csi-hostpath-resizer-0" [2c7b9981-8cbf-4a2b-9574-466c8e994e01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 10:33:19.996568  587312 system_pods.go:89] "csi-hostpathplugin-b4dh9" [55fd9315-cb8f-42e6-97a0-fde619910c0a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 10:33:19.996572  587312 system_pods.go:89] "etcd-addons-800763" [2c8e5b83-56c1-46fe-8cc2-39c23a8e008d] Running
	I1115 10:33:19.996578  587312 system_pods.go:89] "kindnet-blpd7" [c0b223fb-ecbc-4d00-a17a-40274c700c52] Running
	I1115 10:33:19.996586  587312 system_pods.go:89] "kube-apiserver-addons-800763" [62b84128-b828-4908-8b16-91e9476240ce] Running
	I1115 10:33:19.996590  587312 system_pods.go:89] "kube-controller-manager-addons-800763" [9e722ddb-a0ab-4aba-ba82-5c0bdf11860c] Running
	I1115 10:33:19.996599  587312 system_pods.go:89] "kube-ingress-dns-minikube" [2dba2ab5-4914-478f-9c10-795dbab5f3af] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 10:33:19.996603  587312 system_pods.go:89] "kube-proxy-pg4bh" [43dc5f94-c11b-496a-ae5d-99234a4deef4] Running
	I1115 10:33:19.996614  587312 system_pods.go:89] "kube-scheduler-addons-800763" [08d5b3fb-0f2f-4918-89e8-894a4e0e9c1d] Running
	I1115 10:33:19.996623  587312 system_pods.go:89] "metrics-server-85b7d694d7-prnnw" [e4bf858d-9ccf-4498-8768-bc438175359c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 10:33:19.996636  587312 system_pods.go:89] "nvidia-device-plugin-daemonset-hc67v" [b43ea056-fc7f-4902-9fbc-05323afb5fcb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 10:33:19.996642  587312 system_pods.go:89] "registry-6b586f9694-snxbp" [3a35acf9-5d71-44e2-ada7-fdace707ff15] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 10:33:19.996652  587312 system_pods.go:89] "registry-creds-764b6fb674-66shb" [ee5928cb-0522-4d75-86c9-719f510099ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 10:33:19.996658  587312 system_pods.go:89] "registry-proxy-frc5j" [e0a7d020-9896-4589-a8bc-55ad79130028] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 10:33:19.996664  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtn6d" [b3f3e5e9-84bb-4850-a904-1d5a2c83b360] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:19.996671  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s9tcg" [fdaace29-7975-400f-8eab-881c79905faf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:19.996679  587312 system_pods.go:89] "storage-provisioner" [a90fe012-a522-4f37-af5b-6658b6b6e0d9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:33:19.996698  587312 retry.go:31] will retry after 464.672682ms: missing components: kube-dns
	I1115 10:33:20.059901  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:20.102602  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:20.102787  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:20.227999  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:20.466503  587312 system_pods.go:86] 19 kube-system pods found
	I1115 10:33:20.466539  587312 system_pods.go:89] "coredns-66bc5c9577-b4lj6" [0ee1c332-a1ab-4604-aad1-214952a53d07] Running
	I1115 10:33:20.466548  587312 system_pods.go:89] "csi-hostpath-attacher-0" [de7b7f73-61b6-4a36-81d1-37e603004b87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 10:33:20.466555  587312 system_pods.go:89] "csi-hostpath-resizer-0" [2c7b9981-8cbf-4a2b-9574-466c8e994e01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 10:33:20.466564  587312 system_pods.go:89] "csi-hostpathplugin-b4dh9" [55fd9315-cb8f-42e6-97a0-fde619910c0a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 10:33:20.466571  587312 system_pods.go:89] "etcd-addons-800763" [2c8e5b83-56c1-46fe-8cc2-39c23a8e008d] Running
	I1115 10:33:20.466576  587312 system_pods.go:89] "kindnet-blpd7" [c0b223fb-ecbc-4d00-a17a-40274c700c52] Running
	I1115 10:33:20.466581  587312 system_pods.go:89] "kube-apiserver-addons-800763" [62b84128-b828-4908-8b16-91e9476240ce] Running
	I1115 10:33:20.466585  587312 system_pods.go:89] "kube-controller-manager-addons-800763" [9e722ddb-a0ab-4aba-ba82-5c0bdf11860c] Running
	I1115 10:33:20.466591  587312 system_pods.go:89] "kube-ingress-dns-minikube" [2dba2ab5-4914-478f-9c10-795dbab5f3af] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 10:33:20.466598  587312 system_pods.go:89] "kube-proxy-pg4bh" [43dc5f94-c11b-496a-ae5d-99234a4deef4] Running
	I1115 10:33:20.466603  587312 system_pods.go:89] "kube-scheduler-addons-800763" [08d5b3fb-0f2f-4918-89e8-894a4e0e9c1d] Running
	I1115 10:33:20.466609  587312 system_pods.go:89] "metrics-server-85b7d694d7-prnnw" [e4bf858d-9ccf-4498-8768-bc438175359c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 10:33:20.466622  587312 system_pods.go:89] "nvidia-device-plugin-daemonset-hc67v" [b43ea056-fc7f-4902-9fbc-05323afb5fcb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 10:33:20.466628  587312 system_pods.go:89] "registry-6b586f9694-snxbp" [3a35acf9-5d71-44e2-ada7-fdace707ff15] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 10:33:20.466640  587312 system_pods.go:89] "registry-creds-764b6fb674-66shb" [ee5928cb-0522-4d75-86c9-719f510099ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 10:33:20.466647  587312 system_pods.go:89] "registry-proxy-frc5j" [e0a7d020-9896-4589-a8bc-55ad79130028] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 10:33:20.466660  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtn6d" [b3f3e5e9-84bb-4850-a904-1d5a2c83b360] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:20.466667  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s9tcg" [fdaace29-7975-400f-8eab-881c79905faf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:20.466671  587312 system_pods.go:89] "storage-provisioner" [a90fe012-a522-4f37-af5b-6658b6b6e0d9] Running
	I1115 10:33:20.466683  587312 system_pods.go:126] duration metric: took 1.480584372s to wait for k8s-apps to be running ...
	I1115 10:33:20.466694  587312 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:33:20.466751  587312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:33:20.484125  587312 system_svc.go:56] duration metric: took 17.420976ms WaitForService to wait for kubelet
	I1115 10:33:20.484156  587312 kubeadm.go:587] duration metric: took 43.089915241s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:33:20.484185  587312 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:33:20.487211  587312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:33:20.487246  587312 node_conditions.go:123] node cpu capacity is 2
	I1115 10:33:20.487261  587312 node_conditions.go:105] duration metric: took 3.065514ms to run NodePressure ...
	I1115 10:33:20.487274  587312 start.go:242] waiting for startup goroutines ...
	I1115 10:33:20.553326  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:20.563462  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:20.563640  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:20.728121  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:21.054231  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:21.062803  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:21.064645  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:21.227819  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:21.553464  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:21.562451  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:21.563152  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:21.731530  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:22.054028  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:22.063209  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:22.065344  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:22.228149  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:22.554517  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:22.564318  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:22.564890  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:22.757900  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:23.054156  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:23.061781  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:23.063456  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:23.228547  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:23.552982  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:23.562058  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:23.564442  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:23.727997  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:24.053572  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:24.062114  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:24.064587  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:24.228216  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:24.554170  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:24.564277  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:24.564732  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:24.728097  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:25.053674  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:25.064454  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:25.064908  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:25.228309  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:25.553219  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:25.564232  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:25.564658  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:25.727696  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:26.054101  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:26.063241  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:26.063410  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:26.228568  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:26.552821  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:26.562616  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:26.563837  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:26.728100  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:27.053674  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:27.064612  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:27.065017  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:27.235883  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:27.553775  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:27.561531  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:27.564631  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:27.727598  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:28.052973  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:28.062359  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:28.063673  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:28.227910  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:28.553785  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:28.566994  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:28.567383  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:28.728691  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:29.054938  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:29.069996  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:29.070507  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:29.228176  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:29.555358  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:29.566542  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:29.567075  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:29.731425  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:30.099731  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:30.099891  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:30.101842  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:30.230401  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:30.552952  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:30.562763  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:30.564782  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:30.727802  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:31.057088  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:31.157803  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:31.158222  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:31.266563  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:31.554464  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:31.563127  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:31.563207  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:31.728423  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:32.053686  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:32.066021  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:32.066676  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:32.227694  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:32.554284  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:32.563966  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:32.564170  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:32.728341  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:33.053478  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:33.063641  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:33.064357  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:33.230356  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:33.552823  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:33.562808  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:33.565111  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:33.728311  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:34.053671  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:34.063581  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:34.064201  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:34.231590  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:34.553384  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:34.563171  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:34.563311  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:34.730914  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:35.054078  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:35.064195  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:35.064608  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:35.229140  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:35.554036  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:35.564518  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:35.564952  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:35.728212  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:36.052461  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:36.063867  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:36.063972  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:36.228312  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:36.552952  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:36.561817  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:36.564246  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:36.728473  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:37.054174  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:37.065043  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:37.067791  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:37.239766  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:37.554262  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:37.564464  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:37.565066  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:37.728411  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:38.053823  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:38.065271  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:38.065692  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:38.302307  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:38.553559  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:38.563269  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:38.564339  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:38.728289  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:39.053608  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:39.062926  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:39.064212  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:39.227988  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:39.553806  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:39.561956  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:39.563681  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:39.730196  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:40.057049  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:40.073559  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:40.074044  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:40.235027  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:40.554292  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:40.564587  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:40.565046  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:40.728605  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:41.053571  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:41.062932  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:41.063284  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:41.229955  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:41.554364  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:41.564339  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:41.564661  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:41.727689  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:42.053609  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:42.065007  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:42.065919  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:42.228615  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:42.554190  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:42.564321  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:42.564748  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:42.727684  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:43.053493  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:43.063495  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:43.064830  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:43.228045  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:43.553953  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:43.563313  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:43.563772  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:43.727654  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:44.053442  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:44.064059  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:44.064596  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:44.227457  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:44.553644  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:44.563761  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:44.564073  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:44.728811  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:45.066911  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:45.082737  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:45.116725  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:45.243730  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:45.553667  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:45.563033  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:45.564242  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:45.728583  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:46.053958  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:46.063237  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:46.063517  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:46.228162  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:46.553852  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:46.561611  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:46.563415  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:46.730507  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:47.053871  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:47.062796  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:47.062901  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:47.228232  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:47.553929  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:47.563459  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:47.563887  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:47.728114  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:48.054139  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:48.064255  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:48.064597  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:48.227615  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:48.553871  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:48.563334  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:48.563513  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:48.728631  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:49.053013  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:49.062009  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:49.063664  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:49.227844  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:49.554439  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:49.563806  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:49.564249  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:49.727635  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:50.054473  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:50.064040  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:50.067343  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:50.228275  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:50.554212  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:50.564646  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:50.565134  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:50.729703  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:51.053839  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:51.064836  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:51.065351  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:51.231027  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:51.561142  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:51.564312  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:51.565718  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:51.728224  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:52.053693  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:52.063548  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:52.063611  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:52.229980  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:52.554287  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:52.564007  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:52.564265  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:52.728228  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:53.054309  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:53.062820  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:53.063372  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:53.227668  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:53.553221  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:53.561799  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:53.563865  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:53.727873  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:54.053986  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:54.062130  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:54.063373  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:54.227891  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:54.553730  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:54.562184  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:54.563876  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:54.727804  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:55.054174  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:55.063455  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:55.063561  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:55.228082  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:55.553642  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:55.562948  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:55.563073  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:55.728495  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:56.053958  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:56.062226  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:56.063662  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:56.228483  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:56.554537  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:56.564085  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:56.564524  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:56.727972  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:57.054211  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:57.063364  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:57.065157  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:57.228479  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:57.553630  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:57.563480  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:57.563672  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:57.727906  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:58.053305  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:58.070371  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:58.076624  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:58.227479  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:58.558570  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:58.563803  587312 kapi.go:107] duration metric: took 1m15.004947568s to wait for kubernetes.io/minikube-addons=registry ...
	I1115 10:33:58.564186  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:58.728413  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:59.053936  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:59.062812  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:59.228058  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:59.553622  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:59.563760  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:59.727930  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:00.057207  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:00.072979  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:00.247986  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:00.554663  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:00.563644  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:00.727695  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:01.053799  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:01.063062  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:01.228835  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:01.553733  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:01.562967  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:01.730741  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:02.053589  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:02.063685  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:02.230105  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:02.553403  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:02.563630  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:02.727862  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:03.054005  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:03.063581  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:03.228048  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:03.554651  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:03.563153  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:03.729206  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:04.054779  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:04.064823  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:04.230163  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:04.554329  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:04.563360  587312 kapi.go:107] duration metric: took 1m21.003658667s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1115 10:34:04.728953  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:05.054438  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:05.229851  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:05.588691  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:05.728578  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:06.053790  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:06.227541  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:06.553246  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:06.728275  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:07.052907  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:07.228308  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:07.557839  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:07.729298  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:08.053688  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:08.227730  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:08.553502  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:08.728059  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:09.056651  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:09.229009  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:09.554374  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:09.728788  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:10.053745  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:10.229221  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:10.554539  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:10.728670  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:11.053511  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:11.228146  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:11.554766  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:11.728590  587312 kapi.go:107] duration metric: took 1m24.504064309s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1115 10:34:11.732140  587312 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-800763 cluster.
	I1115 10:34:11.735195  587312 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1115 10:34:11.738255  587312 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1115 10:34:12.053420  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:12.554091  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:13.054330  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:13.553270  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:14.053402  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:14.553572  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:15.061506  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:15.553360  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:16.053776  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:16.571624  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:17.064309  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:17.564393  587312 kapi.go:107] duration metric: took 1m33.514791144s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1115 10:34:17.605876  587312 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, inspektor-gadget, amd-gpu-device-plugin, cloud-spanner, ingress-dns, nvidia-device-plugin, registry-creds, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1115 10:34:17.618650  587312 addons.go:515] duration metric: took 1m40.224083035s for enable addons: enabled=[default-storageclass storage-provisioner inspektor-gadget amd-gpu-device-plugin cloud-spanner ingress-dns nvidia-device-plugin registry-creds metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1115 10:34:17.618710  587312 start.go:247] waiting for cluster config update ...
	I1115 10:34:17.618734  587312 start.go:256] writing updated cluster config ...
	I1115 10:34:17.620311  587312 ssh_runner.go:195] Run: rm -f paused
	I1115 10:34:17.629425  587312 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:34:17.635084  587312 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b4lj6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:17.640307  587312 pod_ready.go:94] pod "coredns-66bc5c9577-b4lj6" is "Ready"
	I1115 10:34:17.640376  587312 pod_ready.go:86] duration metric: took 5.219024ms for pod "coredns-66bc5c9577-b4lj6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:17.642977  587312 pod_ready.go:83] waiting for pod "etcd-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:17.652700  587312 pod_ready.go:94] pod "etcd-addons-800763" is "Ready"
	I1115 10:34:17.652775  587312 pod_ready.go:86] duration metric: took 9.734245ms for pod "etcd-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:17.672118  587312 pod_ready.go:83] waiting for pod "kube-apiserver-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:17.677660  587312 pod_ready.go:94] pod "kube-apiserver-addons-800763" is "Ready"
	I1115 10:34:17.677737  587312 pod_ready.go:86] duration metric: took 5.549949ms for pod "kube-apiserver-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:17.680811  587312 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:18.034405  587312 pod_ready.go:94] pod "kube-controller-manager-addons-800763" is "Ready"
	I1115 10:34:18.034491  587312 pod_ready.go:86] duration metric: took 353.580808ms for pod "kube-controller-manager-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:18.233970  587312 pod_ready.go:83] waiting for pod "kube-proxy-pg4bh" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:18.633299  587312 pod_ready.go:94] pod "kube-proxy-pg4bh" is "Ready"
	I1115 10:34:18.633368  587312 pod_ready.go:86] duration metric: took 399.370522ms for pod "kube-proxy-pg4bh" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:18.834347  587312 pod_ready.go:83] waiting for pod "kube-scheduler-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:19.233708  587312 pod_ready.go:94] pod "kube-scheduler-addons-800763" is "Ready"
	I1115 10:34:19.233745  587312 pod_ready.go:86] duration metric: took 399.327617ms for pod "kube-scheduler-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:19.233763  587312 pod_ready.go:40] duration metric: took 1.604307989s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:34:19.292084  587312 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:34:19.296093  587312 out.go:179] * Done! kubectl is now configured to use "addons-800763" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 10:37:10 addons-800763 crio[828]: time="2025-11-15T10:37:10.384496866Z" level=info msg="Removed container 9c1cf5bdbdac59c5080122885cf10a7734c3806ce7f2449feb8b8c88cae00ae9: kube-system/registry-creds-764b6fb674-66shb/registry-creds" id=5a200570-8696-4ddd-8d0c-9ccc627ccfee name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:37:19 addons-800763 crio[828]: time="2025-11-15T10:37:19.707497967Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-2nrrp/POD" id=f9ee7cfe-615e-4f3c-bcb4-e74b297c9399 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:37:19 addons-800763 crio[828]: time="2025-11-15T10:37:19.70758203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:19 addons-800763 crio[828]: time="2025-11-15T10:37:19.716758261Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-2nrrp Namespace:default ID:a36287d9bc30960a44ba9e7396b44720371b4bf20ba5feb97e171acf41d862c7 UID:99695bd9-35e3-4829-9a4d-467d88af55a6 NetNS:/var/run/netns/e8976aab-436f-4b44-a4fe-70aa048e1421 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001482b58}] Aliases:map[]}"
	Nov 15 10:37:19 addons-800763 crio[828]: time="2025-11-15T10:37:19.720532738Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-2nrrp to CNI network \"kindnet\" (type=ptp)"
	Nov 15 10:37:19 addons-800763 crio[828]: time="2025-11-15T10:37:19.756088521Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-2nrrp Namespace:default ID:a36287d9bc30960a44ba9e7396b44720371b4bf20ba5feb97e171acf41d862c7 UID:99695bd9-35e3-4829-9a4d-467d88af55a6 NetNS:/var/run/netns/e8976aab-436f-4b44-a4fe-70aa048e1421 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001482b58}] Aliases:map[]}"
	Nov 15 10:37:19 addons-800763 crio[828]: time="2025-11-15T10:37:19.756447302Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-2nrrp for CNI network kindnet (type=ptp)"
	Nov 15 10:37:19 addons-800763 crio[828]: time="2025-11-15T10:37:19.76562963Z" level=info msg="Ran pod sandbox a36287d9bc30960a44ba9e7396b44720371b4bf20ba5feb97e171acf41d862c7 with infra container: default/hello-world-app-5d498dc89-2nrrp/POD" id=f9ee7cfe-615e-4f3c-bcb4-e74b297c9399 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:37:19 addons-800763 crio[828]: time="2025-11-15T10:37:19.77653997Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=964a1b46-f75e-4425-999c-2cd6fa793d48 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:37:19 addons-800763 crio[828]: time="2025-11-15T10:37:19.776669965Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=964a1b46-f75e-4425-999c-2cd6fa793d48 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:37:19 addons-800763 crio[828]: time="2025-11-15T10:37:19.77670912Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=964a1b46-f75e-4425-999c-2cd6fa793d48 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:37:19 addons-800763 crio[828]: time="2025-11-15T10:37:19.777964405Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=d51abf2b-a717-4894-8904-7e7e9a8677ba name=/runtime.v1.ImageService/PullImage
	Nov 15 10:37:19 addons-800763 crio[828]: time="2025-11-15T10:37:19.781151446Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 15 10:37:20 addons-800763 crio[828]: time="2025-11-15T10:37:20.405807437Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=d51abf2b-a717-4894-8904-7e7e9a8677ba name=/runtime.v1.ImageService/PullImage
	Nov 15 10:37:20 addons-800763 crio[828]: time="2025-11-15T10:37:20.406352099Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ff2b374b-c010-4327-b1d9-bd11a5336df6 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:37:20 addons-800763 crio[828]: time="2025-11-15T10:37:20.410037542Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=69af89d8-b452-4330-a79b-a1e715f4bbaf name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:37:20 addons-800763 crio[828]: time="2025-11-15T10:37:20.418076717Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-2nrrp/hello-world-app" id=09d508a6-d265-4b54-a279-0165a6ede468 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:37:20 addons-800763 crio[828]: time="2025-11-15T10:37:20.418361923Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:20 addons-800763 crio[828]: time="2025-11-15T10:37:20.436443949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:20 addons-800763 crio[828]: time="2025-11-15T10:37:20.436821947Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/961079176f05bc8862b24e548a7d2d14b2ae8dedff13d160d44bfc099ec45615/merged/etc/passwd: no such file or directory"
	Nov 15 10:37:20 addons-800763 crio[828]: time="2025-11-15T10:37:20.437025855Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/961079176f05bc8862b24e548a7d2d14b2ae8dedff13d160d44bfc099ec45615/merged/etc/group: no such file or directory"
	Nov 15 10:37:20 addons-800763 crio[828]: time="2025-11-15T10:37:20.437807912Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:20 addons-800763 crio[828]: time="2025-11-15T10:37:20.473046341Z" level=info msg="Created container 3d4a22af98ad869dcbb890c019ca3928dab49884f8ff3df1a7e49077aa65a84e: default/hello-world-app-5d498dc89-2nrrp/hello-world-app" id=09d508a6-d265-4b54-a279-0165a6ede468 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:37:20 addons-800763 crio[828]: time="2025-11-15T10:37:20.474267237Z" level=info msg="Starting container: 3d4a22af98ad869dcbb890c019ca3928dab49884f8ff3df1a7e49077aa65a84e" id=c619be23-f318-4ffa-b8c3-6e2ecfad6222 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:37:20 addons-800763 crio[828]: time="2025-11-15T10:37:20.477350761Z" level=info msg="Started container" PID=7164 containerID=3d4a22af98ad869dcbb890c019ca3928dab49884f8ff3df1a7e49077aa65a84e description=default/hello-world-app-5d498dc89-2nrrp/hello-world-app id=c619be23-f318-4ffa-b8c3-6e2ecfad6222 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a36287d9bc30960a44ba9e7396b44720371b4bf20ba5feb97e171acf41d862c7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	3d4a22af98ad8       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   a36287d9bc309       hello-world-app-5d498dc89-2nrrp            default
	d683294c30f3c       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             11 seconds ago           Exited              registry-creds                           4                   fd77f912a5f86       registry-creds-764b6fb674-66shb            kube-system
	4fa237a5cac8a       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   82bc66cb48cf0       nginx                                      default
	b1cb9e8cb1d13       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   0661af5accbd0       busybox                                    default
	a9042d386fbd2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   cb0bc7f7e436c       csi-hostpathplugin-b4dh9                   kube-system
	58a3d7ae99c21       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   cb0bc7f7e436c       csi-hostpathplugin-b4dh9                   kube-system
	cf788ff5ca9ed       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   cb0bc7f7e436c       csi-hostpathplugin-b4dh9                   kube-system
	b92e7600078c6       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   cb0bc7f7e436c       csi-hostpathplugin-b4dh9                   kube-system
	242745a7e2fe0       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   cb0bc7f7e436c       csi-hostpathplugin-b4dh9                   kube-system
	6a7ca91f99713       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   d4ba222fb6799       gcp-auth-78565c9fb4-st9nz                  gcp-auth
	40c1b4b58e2c3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   5855f10464733       gadget-xqsc5                               gadget
	bdf22aea1b665       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   684aa11646346       ingress-nginx-controller-6c8bf45fb-krqbs   ingress-nginx
	eb2c40f0693a3       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   5b09fb9d79f70       registry-proxy-frc5j                       kube-system
	b7438622a3867       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   cb0bc7f7e436c       csi-hostpathplugin-b4dh9                   kube-system
	a4dbbc00dd992       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   dc7c732bd16eb       nvidia-device-plugin-daemonset-hc67v       kube-system
	fcf398de5ed70       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   55ee16e3ed6d6       csi-hostpath-attacher-0                    kube-system
	63983c9057a45       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   848676623a853       csi-hostpath-resizer-0                     kube-system
	b45bc2cf489d6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              patch                                    0                   a2e98578048b2       ingress-nginx-admission-patch-cz8cl        ingress-nginx
	f1d40dd3d4b9b       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   bfe4e2b3832ac       cloud-spanner-emulator-6f9fcf858b-rkrsf    default
	0c61d36c7a511       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   63d904695c088       snapshot-controller-7d9fbc56b8-dtn6d       kube-system
	3ca4db6a78c91       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   5196234c7f03f       kube-ingress-dns-minikube                  kube-system
	0c93957738cd9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              create                                   0                   ec5b07abb4475       ingress-nginx-admission-create-9p9nh       ingress-nginx
	17bb62b0a1fd7       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   d50e67db923b6       snapshot-controller-7d9fbc56b8-s9tcg       kube-system
	d8aa5125c640c       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   1cb3d0792a284       metrics-server-85b7d694d7-prnnw            kube-system
	27c39a02e207f       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   2ebb2c6548d34       local-path-provisioner-648f6765c9-hbdxm    local-path-storage
	50086071e8f3c       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   f0ecc018e9534       yakd-dashboard-5ff678cb9-2phbk             yakd-dashboard
	a04a62ebc8233       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   b754a0cbe227d       registry-6b586f9694-snxbp                  kube-system
	4543ce964ae98       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   d87a7fe56fe87       storage-provisioner                        kube-system
	1709e904357b7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   780c2b0afd478       coredns-66bc5c9577-b4lj6                   kube-system
	60697a03970e4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   252750f24ec16       kindnet-blpd7                              kube-system
	2cdb176563f24       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   50698c6e5eba8       kube-proxy-pg4bh                           kube-system
	d105de9b64fd9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             4 minutes ago            Running             kube-apiserver                           0                   e70ef115896f2       kube-apiserver-addons-800763               kube-system
	33e51fd6419d3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             4 minutes ago            Running             kube-scheduler                           0                   7d18e67da7671       kube-scheduler-addons-800763               kube-system
	7cfdcbc77bbdd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             4 minutes ago            Running             etcd                                     0                   08cef7e93c9dd       etcd-addons-800763                         kube-system
	7d2cf8e9b9a68       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             4 minutes ago            Running             kube-controller-manager                  0                   49581036023d5       kube-controller-manager-addons-800763      kube-system
	
	
	==> coredns [1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f] <==
	[INFO] 10.244.0.17:57801 - 39317 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002093038s
	[INFO] 10.244.0.17:57801 - 59765 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000437322s
	[INFO] 10.244.0.17:57801 - 43242 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000488022s
	[INFO] 10.244.0.17:44372 - 50193 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000154488s
	[INFO] 10.244.0.17:44372 - 49988 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000253238s
	[INFO] 10.244.0.17:41539 - 43263 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000118639s
	[INFO] 10.244.0.17:41539 - 42802 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000159346s
	[INFO] 10.244.0.17:54013 - 55553 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101737s
	[INFO] 10.244.0.17:54013 - 55373 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000136158s
	[INFO] 10.244.0.17:48906 - 64736 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00134126s
	[INFO] 10.244.0.17:48906 - 64916 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001493818s
	[INFO] 10.244.0.17:50827 - 54860 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000126484s
	[INFO] 10.244.0.17:50827 - 54671 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000173819s
	[INFO] 10.244.0.21:33288 - 27293 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00040675s
	[INFO] 10.244.0.21:57709 - 57858 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000259285s
	[INFO] 10.244.0.21:58593 - 58638 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147899s
	[INFO] 10.244.0.21:57486 - 23006 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125212s
	[INFO] 10.244.0.21:58345 - 21233 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000159961s
	[INFO] 10.244.0.21:56598 - 54193 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015361s
	[INFO] 10.244.0.21:47737 - 24645 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002216757s
	[INFO] 10.244.0.21:37859 - 34830 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001974752s
	[INFO] 10.244.0.21:37185 - 60747 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002103811s
	[INFO] 10.244.0.21:40060 - 15453 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00227576s
	[INFO] 10.244.0.23:55364 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000276393s
	[INFO] 10.244.0.23:53897 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101606s
	
	
	==> describe nodes <==
	Name:               addons-800763
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-800763
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=addons-800763
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_32_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-800763
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-800763"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:32:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-800763
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:37:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:36:16 +0000   Sat, 15 Nov 2025 10:32:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:36:16 +0000   Sat, 15 Nov 2025 10:32:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:36:16 +0000   Sat, 15 Nov 2025 10:32:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:36:16 +0000   Sat, 15 Nov 2025 10:33:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-800763
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                f6721dac-01aa-47dc-9bba-4ca8229436ed
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     cloud-spanner-emulator-6f9fcf858b-rkrsf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  default                     hello-world-app-5d498dc89-2nrrp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-xqsc5                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  gcp-auth                    gcp-auth-78565c9fb4-st9nz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-krqbs    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m38s
	  kube-system                 coredns-66bc5c9577-b4lj6                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m44s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpathplugin-b4dh9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 etcd-addons-800763                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m49s
	  kube-system                 kindnet-blpd7                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m44s
	  kube-system                 kube-apiserver-addons-800763                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-controller-manager-addons-800763       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-proxy-pg4bh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-scheduler-addons-800763                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 metrics-server-85b7d694d7-prnnw             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m39s
	  kube-system                 nvidia-device-plugin-daemonset-hc67v        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 registry-6b586f9694-snxbp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 registry-creds-764b6fb674-66shb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 registry-proxy-frc5j                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 snapshot-controller-7d9fbc56b8-dtn6d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 snapshot-controller-7d9fbc56b8-s9tcg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  local-path-storage          local-path-provisioner-648f6765c9-hbdxm     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-2phbk              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m42s  kube-proxy       
	  Normal   Starting                 4m50s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m50s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m50s  kubelet          Node addons-800763 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m50s  kubelet          Node addons-800763 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m50s  kubelet          Node addons-800763 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m45s  node-controller  Node addons-800763 event: Registered Node addons-800763 in Controller
	  Normal   NodeReady                4m3s   kubelet          Node addons-800763 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov15 09:26] systemd-journald[225]: Failed to send WATCHDOG=1 notification message: Connection refused
	[Nov15 09:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[  +0.057232] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f] <==
	{"level":"warn","ts":"2025-11-15T10:32:27.590214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.616982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.660905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.701304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.736900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.781919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.812902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.849721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.889124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.930865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.001046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.015717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.054713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.070801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.183686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.209139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.221135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.249826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.352939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:44.276978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:44.295523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:33:06.361256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:33:06.369406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:33:06.390442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:33:06.406444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56552","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [6a7ca91f997133c906da4b013fa65949a3c035d9fda4015f3181d5667f3cf1ff] <==
	2025/11/15 10:34:10 GCP Auth Webhook started!
	2025/11/15 10:34:19 Ready to marshal response ...
	2025/11/15 10:34:19 Ready to write response ...
	2025/11/15 10:34:20 Ready to marshal response ...
	2025/11/15 10:34:20 Ready to write response ...
	2025/11/15 10:34:20 Ready to marshal response ...
	2025/11/15 10:34:20 Ready to write response ...
	2025/11/15 10:34:42 Ready to marshal response ...
	2025/11/15 10:34:42 Ready to write response ...
	2025/11/15 10:34:45 Ready to marshal response ...
	2025/11/15 10:34:45 Ready to write response ...
	2025/11/15 10:34:58 Ready to marshal response ...
	2025/11/15 10:34:58 Ready to write response ...
	2025/11/15 10:35:04 Ready to marshal response ...
	2025/11/15 10:35:04 Ready to write response ...
	2025/11/15 10:35:24 Ready to marshal response ...
	2025/11/15 10:35:24 Ready to write response ...
	2025/11/15 10:35:24 Ready to marshal response ...
	2025/11/15 10:35:24 Ready to write response ...
	2025/11/15 10:35:32 Ready to marshal response ...
	2025/11/15 10:35:32 Ready to write response ...
	2025/11/15 10:37:19 Ready to marshal response ...
	2025/11/15 10:37:19 Ready to write response ...
	
	
	==> kernel <==
	 10:37:21 up  2:19,  0 user,  load average: 0.49, 1.85, 2.91
	Linux addons-800763 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108] <==
	I1115 10:35:18.405254       1 main.go:301] handling current node
	I1115 10:35:28.400724       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:35:28.400785       1 main.go:301] handling current node
	I1115 10:35:38.400942       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:35:38.400977       1 main.go:301] handling current node
	I1115 10:35:48.400720       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:35:48.400768       1 main.go:301] handling current node
	I1115 10:35:58.400714       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:35:58.400750       1 main.go:301] handling current node
	I1115 10:36:08.401714       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:36:08.401747       1 main.go:301] handling current node
	I1115 10:36:18.400994       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:36:18.401028       1 main.go:301] handling current node
	I1115 10:36:28.402297       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:36:28.402350       1 main.go:301] handling current node
	I1115 10:36:38.400734       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:36:38.400767       1 main.go:301] handling current node
	I1115 10:36:48.404957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:36:48.404992       1 main.go:301] handling current node
	I1115 10:36:58.409598       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:36:58.409635       1 main.go:301] handling current node
	I1115 10:37:08.405527       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:37:08.405562       1 main.go:301] handling current node
	I1115 10:37:18.404914       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:37:18.404959       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0] <==
	I1115 10:32:47.079591       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.130.229"}
	W1115 10:33:06.355111       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1115 10:33:06.369426       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1115 10:33:06.390384       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1115 10:33:06.406068       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1115 10:33:18.794605       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.130.229:443: connect: connection refused
	E1115 10:33:18.794648       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.130.229:443: connect: connection refused" logger="UnhandledError"
	W1115 10:33:18.794826       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.130.229:443: connect: connection refused
	E1115 10:33:18.794906       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.130.229:443: connect: connection refused" logger="UnhandledError"
	W1115 10:33:18.899824       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.130.229:443: connect: connection refused
	E1115 10:33:18.899867       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.130.229:443: connect: connection refused" logger="UnhandledError"
	E1115 10:33:31.225009       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.214.220:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.214.220:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.214.220:443: connect: connection refused" logger="UnhandledError"
	W1115 10:33:31.232207       1 handler_proxy.go:99] no RequestInfo found in the context
	E1115 10:33:31.235670       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1115 10:33:31.281789       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1115 10:33:31.287654       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1115 10:34:30.267754       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45434: use of closed network connection
	I1115 10:34:56.742358       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1115 10:34:58.254431       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1115 10:34:58.562502       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.199.102"}
	E1115 10:35:12.096121       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1115 10:37:19.607506       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.27.57"}
	
	
	==> kube-controller-manager [7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15] <==
	I1115 10:32:36.343804       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 10:32:36.343834       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 10:32:36.355332       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-800763" podCIDRs=["10.244.0.0/24"]
	I1115 10:32:36.364547       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:32:36.369743       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:32:36.374289       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:32:36.375502       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:32:36.376094       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:32:36.376155       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:32:36.376681       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:32:36.376817       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:32:36.376691       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:32:36.377015       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:32:36.378847       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:32:36.379119       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:32:36.389154       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	E1115 10:32:42.251039       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1115 10:33:06.348559       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1115 10:33:06.348713       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1115 10:33:06.348782       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1115 10:33:06.376939       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1115 10:33:06.381846       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 10:33:06.449300       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:33:06.482370       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:33:21.337396       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e] <==
	I1115 10:32:38.361253       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:32:38.462528       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:32:38.564311       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:32:38.564348       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 10:32:38.564439       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:32:38.655991       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:32:38.656056       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:32:38.678157       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:32:38.678495       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:32:38.678518       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:32:38.704009       1 config.go:200] "Starting service config controller"
	I1115 10:32:38.704033       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:32:38.704153       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:32:38.704167       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:32:38.704583       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:32:38.704598       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:32:38.709026       1 config.go:309] "Starting node config controller"
	I1115 10:32:38.709056       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:32:38.804290       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:32:38.804381       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:32:38.804696       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:32:38.833145       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44] <==
	E1115 10:32:29.386525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:32:29.386560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:32:29.393529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:32:29.393681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:32:29.394279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:32:29.394289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:32:29.394337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:32:29.394382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:32:29.394457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:32:29.394484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:32:29.394508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:32:29.394532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:32:29.394563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:32:29.394578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:32:30.303647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:32:30.305090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:32:30.321728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:32:30.345550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:32:30.468423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:32:30.492021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:32:30.499143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:32:30.506454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:32:30.519096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:32:30.809020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1115 10:32:33.273243       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:36:20 addons-800763 kubelet[1288]: I1115 10:36:20.177570    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-66shb" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 10:36:20 addons-800763 kubelet[1288]: I1115 10:36:20.177638    1288 scope.go:117] "RemoveContainer" containerID="9c1cf5bdbdac59c5080122885cf10a7734c3806ce7f2449feb8b8c88cae00ae9"
	Nov 15 10:36:20 addons-800763 kubelet[1288]: E1115 10:36:20.177829    1288 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-66shb_kube-system(ee5928cb-0522-4d75-86c9-719f510099ea)\"" pod="kube-system/registry-creds-764b6fb674-66shb" podUID="ee5928cb-0522-4d75-86c9-719f510099ea"
	Nov 15 10:36:28 addons-800763 kubelet[1288]: I1115 10:36:28.723431    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-hc67v" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 10:36:31 addons-800763 kubelet[1288]: I1115 10:36:31.724399    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-66shb" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 10:36:31 addons-800763 kubelet[1288]: I1115 10:36:31.725059    1288 scope.go:117] "RemoveContainer" containerID="9c1cf5bdbdac59c5080122885cf10a7734c3806ce7f2449feb8b8c88cae00ae9"
	Nov 15 10:36:31 addons-800763 kubelet[1288]: E1115 10:36:31.725380    1288 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-66shb_kube-system(ee5928cb-0522-4d75-86c9-719f510099ea)\"" pod="kube-system/registry-creds-764b6fb674-66shb" podUID="ee5928cb-0522-4d75-86c9-719f510099ea"
	Nov 15 10:36:31 addons-800763 kubelet[1288]: I1115 10:36:31.726066    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-frc5j" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 10:36:31 addons-800763 kubelet[1288]: I1115 10:36:31.973620    1288 scope.go:117] "RemoveContainer" containerID="73c00cd50254c79db78eb2945470bc7fa8c748a6227e0ff9dcdb6822de72e126"
	Nov 15 10:36:31 addons-800763 kubelet[1288]: I1115 10:36:31.982423    1288 scope.go:117] "RemoveContainer" containerID="ad75a0b11052de5df97c8c75bbbeb2c52b55e23ce2a3f71db233ec83c39bf415"
	Nov 15 10:36:45 addons-800763 kubelet[1288]: I1115 10:36:45.722771    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-66shb" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 10:36:45 addons-800763 kubelet[1288]: I1115 10:36:45.722833    1288 scope.go:117] "RemoveContainer" containerID="9c1cf5bdbdac59c5080122885cf10a7734c3806ce7f2449feb8b8c88cae00ae9"
	Nov 15 10:36:45 addons-800763 kubelet[1288]: E1115 10:36:45.722991    1288 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-66shb_kube-system(ee5928cb-0522-4d75-86c9-719f510099ea)\"" pod="kube-system/registry-creds-764b6fb674-66shb" podUID="ee5928cb-0522-4d75-86c9-719f510099ea"
	Nov 15 10:36:58 addons-800763 kubelet[1288]: I1115 10:36:58.723406    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-66shb" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 10:36:58 addons-800763 kubelet[1288]: I1115 10:36:58.723487    1288 scope.go:117] "RemoveContainer" containerID="9c1cf5bdbdac59c5080122885cf10a7734c3806ce7f2449feb8b8c88cae00ae9"
	Nov 15 10:36:58 addons-800763 kubelet[1288]: E1115 10:36:58.723651    1288 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-66shb_kube-system(ee5928cb-0522-4d75-86c9-719f510099ea)\"" pod="kube-system/registry-creds-764b6fb674-66shb" podUID="ee5928cb-0522-4d75-86c9-719f510099ea"
	Nov 15 10:37:06 addons-800763 kubelet[1288]: I1115 10:37:06.723176    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-snxbp" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 10:37:09 addons-800763 kubelet[1288]: I1115 10:37:09.723137    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-66shb" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 10:37:09 addons-800763 kubelet[1288]: I1115 10:37:09.723210    1288 scope.go:117] "RemoveContainer" containerID="9c1cf5bdbdac59c5080122885cf10a7734c3806ce7f2449feb8b8c88cae00ae9"
	Nov 15 10:37:10 addons-800763 kubelet[1288]: I1115 10:37:10.357160    1288 scope.go:117] "RemoveContainer" containerID="9c1cf5bdbdac59c5080122885cf10a7734c3806ce7f2449feb8b8c88cae00ae9"
	Nov 15 10:37:10 addons-800763 kubelet[1288]: I1115 10:37:10.357382    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-66shb" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 10:37:10 addons-800763 kubelet[1288]: I1115 10:37:10.357434    1288 scope.go:117] "RemoveContainer" containerID="d683294c30f3cf945bcaf5d8c62baade20da0e8b5d2ded7a16cd715161bf4abc"
	Nov 15 10:37:10 addons-800763 kubelet[1288]: E1115 10:37:10.357676    1288 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-66shb_kube-system(ee5928cb-0522-4d75-86c9-719f510099ea)\"" pod="kube-system/registry-creds-764b6fb674-66shb" podUID="ee5928cb-0522-4d75-86c9-719f510099ea"
	Nov 15 10:37:19 addons-800763 kubelet[1288]: I1115 10:37:19.495741    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhgp6\" (UniqueName: \"kubernetes.io/projected/99695bd9-35e3-4829-9a4d-467d88af55a6-kube-api-access-mhgp6\") pod \"hello-world-app-5d498dc89-2nrrp\" (UID: \"99695bd9-35e3-4829-9a4d-467d88af55a6\") " pod="default/hello-world-app-5d498dc89-2nrrp"
	Nov 15 10:37:19 addons-800763 kubelet[1288]: I1115 10:37:19.496271    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/99695bd9-35e3-4829-9a4d-467d88af55a6-gcp-creds\") pod \"hello-world-app-5d498dc89-2nrrp\" (UID: \"99695bd9-35e3-4829-9a4d-467d88af55a6\") " pod="default/hello-world-app-5d498dc89-2nrrp"
	
	
	==> storage-provisioner [4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d] <==
	W1115 10:36:57.496939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:59.499612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:59.504435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:01.507829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:01.515164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:03.518468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:03.524045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:05.528640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:05.533522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:07.538096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:07.544252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:09.547779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:09.555654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:11.559201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:11.566128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:13.568613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:13.574194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:15.577592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:15.582131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:17.585732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:17.594156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:19.598218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:19.621630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:21.633805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:21.643268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-800763 -n addons-800763
helpers_test.go:269: (dbg) Run:  kubectl --context addons-800763 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-9p9nh ingress-nginx-admission-patch-cz8cl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-800763 describe pod ingress-nginx-admission-create-9p9nh ingress-nginx-admission-patch-cz8cl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-800763 describe pod ingress-nginx-admission-create-9p9nh ingress-nginx-admission-patch-cz8cl: exit status 1 (111.884968ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9p9nh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cz8cl" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-800763 describe pod ingress-nginx-admission-create-9p9nh ingress-nginx-admission-patch-cz8cl: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (323.097012ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 10:37:22.662197  596844 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:37:22.663111  596844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:37:22.663216  596844 out.go:374] Setting ErrFile to fd 2...
	I1115 10:37:22.663230  596844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:37:22.663532  596844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:37:22.663835  596844 mustload.go:66] Loading cluster: addons-800763
	I1115 10:37:22.664227  596844 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:37:22.664242  596844 addons.go:607] checking whether the cluster is paused
	I1115 10:37:22.664351  596844 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:37:22.664367  596844 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:37:22.664816  596844 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:37:22.682864  596844 ssh_runner.go:195] Run: systemctl --version
	I1115 10:37:22.682929  596844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:37:22.701252  596844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:37:22.815996  596844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:37:22.816124  596844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:37:22.899577  596844 cri.go:89] found id: "d683294c30f3cf945bcaf5d8c62baade20da0e8b5d2ded7a16cd715161bf4abc"
	I1115 10:37:22.899650  596844 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:37:22.899669  596844 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:37:22.899687  596844 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:37:22.899720  596844 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:37:22.899744  596844 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:37:22.899764  596844 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:37:22.899783  596844 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:37:22.899816  596844 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:37:22.899844  596844 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:37:22.899862  596844 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:37:22.899889  596844 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:37:22.899911  596844 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:37:22.899931  596844 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:37:22.899950  596844 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:37:22.899979  596844 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:37:22.900014  596844 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:37:22.900035  596844 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:37:22.900067  596844 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:37:22.900088  596844 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:37:22.900110  596844 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:37:22.900129  596844 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:37:22.900164  596844 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:37:22.900181  596844 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:37:22.900200  596844 cri.go:89] found id: ""
	I1115 10:37:22.900284  596844 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:37:22.916039  596844 out.go:203] 
	W1115 10:37:22.918992  596844 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:37:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:37:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:37:22.919083  596844 out.go:285] * 
	* 
	W1115 10:37:22.926209  596844 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:37:22.929177  596844 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-800763 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 addons disable ingress --alsologtostderr -v=1: exit status 11 (270.306401ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 10:37:22.986823  596957 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:37:22.987644  596957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:37:22.987685  596957 out.go:374] Setting ErrFile to fd 2...
	I1115 10:37:22.987697  596957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:37:22.987965  596957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:37:22.988259  596957 mustload.go:66] Loading cluster: addons-800763
	I1115 10:37:22.988618  596957 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:37:22.988628  596957 addons.go:607] checking whether the cluster is paused
	I1115 10:37:22.988729  596957 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:37:22.988738  596957 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:37:22.989341  596957 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:37:23.006205  596957 ssh_runner.go:195] Run: systemctl --version
	I1115 10:37:23.006268  596957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:37:23.025379  596957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:37:23.132829  596957 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:37:23.132981  596957 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:37:23.165669  596957 cri.go:89] found id: "d683294c30f3cf945bcaf5d8c62baade20da0e8b5d2ded7a16cd715161bf4abc"
	I1115 10:37:23.165732  596957 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:37:23.165747  596957 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:37:23.165752  596957 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:37:23.165756  596957 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:37:23.165761  596957 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:37:23.165764  596957 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:37:23.165767  596957 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:37:23.165770  596957 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:37:23.165776  596957 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:37:23.165780  596957 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:37:23.165783  596957 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:37:23.165786  596957 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:37:23.165790  596957 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:37:23.165794  596957 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:37:23.165798  596957 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:37:23.165804  596957 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:37:23.165810  596957 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:37:23.165813  596957 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:37:23.165816  596957 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:37:23.165822  596957 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:37:23.165828  596957 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:37:23.165831  596957 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:37:23.165835  596957 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:37:23.165838  596957 cri.go:89] found id: ""
	I1115 10:37:23.165888  596957 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:37:23.188486  596957 out.go:203] 
	W1115 10:37:23.191417  596957 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:37:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:37:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:37:23.191446  596957 out.go:285] * 
	* 
	W1115 10:37:23.197189  596957 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:37:23.200284  596957 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-800763 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.32s)
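Every addon enable/disable failure in this group shares one root cause visible in the logs above: before touching an addon, minikube checks whether the cluster is paused by listing kube-system containers over CRI and then running `sudo runc list -f json` on the node; on this crio image the crictl listing succeeds but the runc call exits 1 with "open /run/runc: no such file or directory", so the command aborts with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED in the Headlamp case further down). A minimal reproduction sketch against the profile from this run, assuming it is still up; the two node-side commands are taken verbatim from the log, and which runtime root crio is actually configured with is not shown in this report:

	# Re-run the two halves of the paused-cluster check from the logs above.
	minikube -p addons-800763 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds: crio answers over CRI
	minikube -p addons-800763 ssh -- sudo runc list -f json   # fails on this node: open /run/runc: no such file or directory

The InspektorGadget, MetricsServer, CSI, and Headlamp failures below show the same two steps and the same runc error.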

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-xqsc5" [b59e1817-a887-426e-b7e5-e9674f388838] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003446932s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (283.477046ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:34:57.652072  594461 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:34:57.652851  594461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:57.652942  594461 out.go:374] Setting ErrFile to fd 2...
	I1115 10:34:57.652963  594461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:57.653258  594461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:34:57.653597  594461 mustload.go:66] Loading cluster: addons-800763
	I1115 10:34:57.654028  594461 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:57.654072  594461 addons.go:607] checking whether the cluster is paused
	I1115 10:34:57.654202  594461 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:57.654237  594461 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:34:57.654703  594461 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:34:57.671805  594461 ssh_runner.go:195] Run: systemctl --version
	I1115 10:34:57.671857  594461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:34:57.689825  594461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:34:57.796219  594461 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:34:57.796308  594461 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:34:57.837592  594461 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:34:57.837614  594461 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:34:57.837619  594461 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:34:57.837623  594461 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:34:57.837636  594461 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:34:57.837641  594461 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:34:57.837644  594461 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:34:57.837648  594461 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:34:57.837651  594461 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:34:57.837663  594461 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:34:57.837667  594461 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:34:57.837670  594461 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:34:57.837673  594461 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:34:57.837678  594461 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:34:57.837681  594461 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:34:57.837690  594461 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:34:57.837693  594461 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:34:57.837698  594461 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:34:57.837702  594461 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:34:57.837705  594461 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:34:57.837709  594461 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:34:57.837713  594461 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:34:57.837716  594461 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:34:57.837718  594461 cri.go:89] found id: ""
	I1115 10:34:57.837769  594461 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:34:57.861690  594461 out.go:203] 
	W1115 10:34:57.864669  594461 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:34:57.864695  594461 out.go:285] * 
	* 
	W1115 10:34:57.871004  594461 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:34:57.874725  594461 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-800763 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.29s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.38s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.853306ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-prnnw" [e4bf858d-9ccf-4498-8768-bc438175359c] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003855349s
addons_test.go:463: (dbg) Run:  kubectl --context addons-800763 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (277.736376ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:34:51.375873  594372 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:34:51.376897  594372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:51.376916  594372 out.go:374] Setting ErrFile to fd 2...
	I1115 10:34:51.376921  594372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:51.377213  594372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:34:51.377566  594372 mustload.go:66] Loading cluster: addons-800763
	I1115 10:34:51.377988  594372 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:51.378008  594372 addons.go:607] checking whether the cluster is paused
	I1115 10:34:51.378118  594372 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:51.378133  594372 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:34:51.378596  594372 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:34:51.399872  594372 ssh_runner.go:195] Run: systemctl --version
	I1115 10:34:51.399932  594372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:34:51.422420  594372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:34:51.527655  594372 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:34:51.527775  594372 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:34:51.562644  594372 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:34:51.562666  594372 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:34:51.562671  594372 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:34:51.562675  594372 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:34:51.562678  594372 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:34:51.562682  594372 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:34:51.562685  594372 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:34:51.562692  594372 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:34:51.562696  594372 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:34:51.562702  594372 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:34:51.562706  594372 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:34:51.562709  594372 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:34:51.562712  594372 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:34:51.562715  594372 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:34:51.562718  594372 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:34:51.562724  594372 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:34:51.562732  594372 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:34:51.562735  594372 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:34:51.562739  594372 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:34:51.562741  594372 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:34:51.562746  594372 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:34:51.562749  594372 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:34:51.562753  594372 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:34:51.562756  594372 cri.go:89] found id: ""
	I1115 10:34:51.562812  594372 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:34:51.579896  594372 out.go:203] 
	W1115 10:34:51.583048  594372 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:34:51.583083  594372 out.go:285] * 
	* 
	W1115 10:34:51.588744  594372 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:34:51.591924  594372 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-800763 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.38s)

                                                
                                    
x
+
TestAddons/parallel/CSI (38.91s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1115 10:34:34.225678  586561 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1115 10:34:34.229031  586561 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1115 10:34:34.229060  586561 kapi.go:107] duration metric: took 3.391983ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.402264ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-800763 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-800763 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [e9714877-d4ec-4a72-afee-4ce0366f5a1e] Pending
helpers_test.go:352: "task-pv-pod" [e9714877-d4ec-4a72-afee-4ce0366f5a1e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [e9714877-d4ec-4a72-afee-4ce0366f5a1e] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.0048852s
addons_test.go:572: (dbg) Run:  kubectl --context addons-800763 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-800763 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-800763 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-800763 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-800763 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-800763 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-800763 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [92442a3b-0238-40b7-95b6-1dabb0894837] Pending
helpers_test.go:352: "task-pv-pod-restore" [92442a3b-0238-40b7-95b6-1dabb0894837] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [92442a3b-0238-40b7-95b6-1dabb0894837] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00383525s
addons_test.go:614: (dbg) Run:  kubectl --context addons-800763 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-800763 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-800763 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (290.11658ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:35:12.637549  595108 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:12.638433  595108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:12.638456  595108 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:12.638464  595108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:12.638762  595108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:35:12.639080  595108 mustload.go:66] Loading cluster: addons-800763
	I1115 10:35:12.639488  595108 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:12.639507  595108 addons.go:607] checking whether the cluster is paused
	I1115 10:35:12.639620  595108 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:12.639633  595108 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:35:12.640183  595108 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:35:12.659616  595108 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:12.659688  595108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:35:12.677238  595108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:35:12.783610  595108 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:12.783716  595108 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:12.819136  595108 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:35:12.819157  595108 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:35:12.819161  595108 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:35:12.819166  595108 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:35:12.819170  595108 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:35:12.819174  595108 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:35:12.819177  595108 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:35:12.819180  595108 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:35:12.819183  595108 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:35:12.819190  595108 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:35:12.819194  595108 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:35:12.819197  595108 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:35:12.819202  595108 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:35:12.819205  595108 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:35:12.819214  595108 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:35:12.819220  595108 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:35:12.819226  595108 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:35:12.819230  595108 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:35:12.819233  595108 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:35:12.819241  595108 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:35:12.819246  595108 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:35:12.819249  595108 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:35:12.819252  595108 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:35:12.819256  595108 cri.go:89] found id: ""
	I1115 10:35:12.819307  595108 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:35:12.834532  595108 out.go:203] 
	W1115 10:35:12.837443  595108 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:35:12.837473  595108 out.go:285] * 
	* 
	W1115 10:35:12.843417  595108 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:35:12.846256  595108 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-800763 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (276.926837ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:35:12.910097  595150 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:12.910931  595150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:12.911036  595150 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:12.911072  595150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:12.911384  595150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:35:12.911705  595150 mustload.go:66] Loading cluster: addons-800763
	I1115 10:35:12.912143  595150 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:12.912188  595150 addons.go:607] checking whether the cluster is paused
	I1115 10:35:12.912322  595150 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:12.912357  595150 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:35:12.912849  595150 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:35:12.930579  595150 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:12.930645  595150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:35:12.949377  595150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:35:13.057351  595150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:13.057455  595150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:13.093349  595150 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:35:13.093419  595150 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:35:13.093437  595150 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:35:13.093465  595150 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:35:13.093497  595150 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:35:13.093534  595150 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:35:13.093553  595150 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:35:13.093571  595150 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:35:13.093601  595150 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:35:13.093628  595150 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:35:13.093647  595150 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:35:13.093675  595150 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:35:13.093707  595150 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:35:13.093732  595150 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:35:13.093750  595150 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:35:13.093769  595150 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:35:13.093809  595150 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:35:13.093828  595150 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:35:13.093844  595150 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:35:13.093863  595150 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:35:13.093884  595150 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:35:13.093910  595150 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:35:13.093935  595150 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:35:13.093953  595150 cri.go:89] found id: ""
	I1115 10:35:13.094044  595150 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:35:13.109894  595150 out.go:203] 
	W1115 10:35:13.112807  595150 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:35:13.112836  595150 out.go:285] * 
	* 
	W1115 10:35:13.118541  595150 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:35:13.121398  595150 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-800763 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (38.91s)
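The CSI scenario itself passed end to end (the hpvc claim bound, task-pv-pod ran, the snapshot became ready, and the restore pod came up within its wait windows); the failure is again only the trailing addon-disable step. For replaying the storage flow by hand outside the test harness, the commands below condense the sequence the test drove; the manifest paths come straight from the log, but their contents are not included in this report, so treat this purely as an illustrative sketch:

	kubectl --context addons-800763 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-800763 get pvc hpvc -n default -o jsonpath={.status.phase}              # poll until Bound
	kubectl --context addons-800763 create -f testdata/csi-hostpath-driver/pv-pod.yaml               # wait for app=task-pv-pod to be Running
	kubectl --context addons-800763 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-800763 get volumesnapshot new-snapshot-demo -n default -o jsonpath={.status.readyToUse}
	kubectl --context addons-800763 delete pod task-pv-pod
	kubectl --context addons-800763 delete pvc hpvc
	kubectl --context addons-800763 create -f testdata/csi-hostpath-driver/pvc-restore.yaml          # new claim restored from the snapshot
	kubectl --context addons-800763 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml       # wait for app=task-pv-pod-restore
	kubectl --context addons-800763 delete pod task-pv-pod-restore
	kubectl --context addons-800763 delete pvc hpvc-restore
	kubectl --context addons-800763 delete volumesnapshot new-snapshot-demo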

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.29s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-800763 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-800763 --alsologtostderr -v=1: exit status 11 (287.517765ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:34:30.991192  593425 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:34:30.992012  593425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:30.992031  593425 out.go:374] Setting ErrFile to fd 2...
	I1115 10:34:30.992036  593425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:30.992376  593425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:34:30.992748  593425 mustload.go:66] Loading cluster: addons-800763
	I1115 10:34:30.993206  593425 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:30.993227  593425 addons.go:607] checking whether the cluster is paused
	I1115 10:34:30.993382  593425 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:30.993401  593425 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:34:30.993933  593425 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:34:31.014318  593425 ssh_runner.go:195] Run: systemctl --version
	I1115 10:34:31.014431  593425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:34:31.033399  593425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:34:31.144172  593425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:34:31.144255  593425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:34:31.185476  593425 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:34:31.185497  593425 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:34:31.185502  593425 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:34:31.185507  593425 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:34:31.185510  593425 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:34:31.185519  593425 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:34:31.185523  593425 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:34:31.185526  593425 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:34:31.185529  593425 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:34:31.185536  593425 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:34:31.185539  593425 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:34:31.185542  593425 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:34:31.185546  593425 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:34:31.185549  593425 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:34:31.185552  593425 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:34:31.185557  593425 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:34:31.185560  593425 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:34:31.185564  593425 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:34:31.185567  593425 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:34:31.185571  593425 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:34:31.185585  593425 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:34:31.185588  593425 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:34:31.185591  593425 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:34:31.185594  593425 cri.go:89] found id: ""
	I1115 10:34:31.185652  593425 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:34:31.202061  593425 out.go:203] 
	W1115 10:34:31.205039  593425 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:34:31.205064  593425 out.go:285] * 
	* 
	W1115 10:34:31.210838  593425 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:34:31.213882  593425 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-800763 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-800763
helpers_test.go:243: (dbg) docker inspect addons-800763:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450",
	        "Created": "2025-11-15T10:32:03.5118468Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 587715,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:32:03.580273666Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450/hostname",
	        "HostsPath": "/var/lib/docker/containers/b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450/hosts",
	        "LogPath": "/var/lib/docker/containers/b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450/b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450-json.log",
	        "Name": "/addons-800763",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-800763:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-800763",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450",
	                "LowerDir": "/var/lib/docker/overlay2/25857c918a19ef2ae9a371ad87df7bd87a6ebd70600dd8906e3cdbdd237174f0-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/25857c918a19ef2ae9a371ad87df7bd87a6ebd70600dd8906e3cdbdd237174f0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/25857c918a19ef2ae9a371ad87df7bd87a6ebd70600dd8906e3cdbdd237174f0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/25857c918a19ef2ae9a371ad87df7bd87a6ebd70600dd8906e3cdbdd237174f0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-800763",
	                "Source": "/var/lib/docker/volumes/addons-800763/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-800763",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-800763",
	                "name.minikube.sigs.k8s.io": "addons-800763",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b871a302261ad83ac50b6e0e0624dd37e10bcad8ef4b3002c71c77a96a6ce618",
	            "SandboxKey": "/var/run/docker/netns/b871a302261a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-800763": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:62:f9:92:c8:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d68a6b13710afe5f0b1c96904b827fbb9442383b2ff3417bc4aa15f1ca8ad42e",
	                    "EndpointID": "be62c7da60fe1d40d6ad7b2a7985994151bbbcd09d99d017e889f738ffe7d8e2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-800763",
	                        "b45b50a37343"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
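For reference, individual fields can be pulled out of the inspect dump above with docker inspect -f, using the same Go templates minikube's own cli_runner invokes later in this log; a short sketch, with the values reported above shown as comments:

	docker inspect -f '{{.State.Status}}' addons-800763                                               # running
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-800763        # 192.168.49.2
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-800763  # 33509 (host port for 22/tcp)
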
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-800763 -n addons-800763
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-800763 logs -n 25: (1.606016444s)
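The issue-reporting box in the stderr above asks for two artifacts; a sketch of collecting them before filing, assuming the profile is still up (the archive name is illustrative only):

	out/minikube-linux-arm64 -p addons-800763 logs --file=logs.txt
	cp /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log .
	tar czf headlamp-postmortem.tar.gz logs.txt minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log
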
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-148158 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-148158   │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:31 UTC │
	│ delete  │ -p download-only-148158                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-148158   │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:31 UTC │
	│ start   │ -o=json --download-only -p download-only-948757 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-948757   │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:31 UTC │
	│ delete  │ -p download-only-948757                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-948757   │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:31 UTC │
	│ delete  │ -p download-only-148158                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-148158   │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:31 UTC │
	│ delete  │ -p download-only-948757                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-948757   │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:31 UTC │
	│ start   │ --download-only -p download-docker-855751 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-855751 │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ delete  │ -p download-docker-855751                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-855751 │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:31 UTC │
	│ start   │ --download-only -p binary-mirror-014145 --alsologtostderr --binary-mirror http://127.0.0.1:33887 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-014145   │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ delete  │ -p binary-mirror-014145                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-014145   │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:31 UTC │
	│ addons  │ enable dashboard -p addons-800763                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ addons  │ disable dashboard -p addons-800763                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ start   │ -p addons-800763 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:34 UTC │
	│ addons  │ addons-800763 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ addons  │ addons-800763 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ addons  │ enable headlamp -p addons-800763 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-800763          │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:31:37
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:31:37.286687  587312 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:31:37.286915  587312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:31:37.286947  587312 out.go:374] Setting ErrFile to fd 2...
	I1115 10:31:37.286968  587312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:31:37.287236  587312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:31:37.287715  587312 out.go:368] Setting JSON to false
	I1115 10:31:37.288578  587312 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8048,"bootTime":1763194649,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 10:31:37.288676  587312 start.go:143] virtualization:  
	I1115 10:31:37.292002  587312 out.go:179] * [addons-800763] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:31:37.295880  587312 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:31:37.296007  587312 notify.go:221] Checking for updates...
	I1115 10:31:37.301811  587312 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:31:37.304749  587312 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 10:31:37.307587  587312 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 10:31:37.310504  587312 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:31:37.313330  587312 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:31:37.316334  587312 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:31:37.339913  587312 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:31:37.340029  587312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:31:37.405700  587312 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-15 10:31:37.396826605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:31:37.405814  587312 docker.go:319] overlay module found
	I1115 10:31:37.408979  587312 out.go:179] * Using the docker driver based on user configuration
	I1115 10:31:37.411735  587312 start.go:309] selected driver: docker
	I1115 10:31:37.411755  587312 start.go:930] validating driver "docker" against <nil>
	I1115 10:31:37.411771  587312 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:31:37.412518  587312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:31:37.464977  587312 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-15 10:31:37.455360743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:31:37.465141  587312 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:31:37.465384  587312 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:31:37.468270  587312 out.go:179] * Using Docker driver with root privileges
	I1115 10:31:37.471115  587312 cni.go:84] Creating CNI manager for ""
	I1115 10:31:37.471179  587312 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:31:37.471192  587312 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:31:37.471264  587312 start.go:353] cluster config:
	{Name:addons-800763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-800763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1115 10:31:37.474372  587312 out.go:179] * Starting "addons-800763" primary control-plane node in "addons-800763" cluster
	I1115 10:31:37.477098  587312 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:31:37.480102  587312 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:31:37.482984  587312 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:31:37.483041  587312 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:31:37.483054  587312 cache.go:65] Caching tarball of preloaded images
	I1115 10:31:37.483064  587312 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:31:37.483140  587312 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:31:37.483154  587312 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:31:37.483489  587312 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/config.json ...
	I1115 10:31:37.483518  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/config.json: {Name:mkf94e9d4ef8eeb627c4a5c077a1fd07c2af97b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:31:37.499704  587312 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 10:31:37.499831  587312 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 10:31:37.499850  587312 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1115 10:31:37.499854  587312 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1115 10:31:37.499862  587312 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1115 10:31:37.499867  587312 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1115 10:31:55.354260  587312 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1115 10:31:55.354300  587312 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:31:55.354339  587312 start.go:360] acquireMachinesLock for addons-800763: {Name:mkeeb6cf50ec492af8c3057917054764961dc2ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:31:55.354485  587312 start.go:364] duration metric: took 121.635µs to acquireMachinesLock for "addons-800763"
	I1115 10:31:55.354520  587312 start.go:93] Provisioning new machine with config: &{Name:addons-800763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-800763 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:31:55.354610  587312 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:31:55.358051  587312 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1115 10:31:55.358313  587312 start.go:159] libmachine.API.Create for "addons-800763" (driver="docker")
	I1115 10:31:55.358352  587312 client.go:173] LocalClient.Create starting
	I1115 10:31:55.358467  587312 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 10:31:55.473535  587312 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 10:31:56.318648  587312 cli_runner.go:164] Run: docker network inspect addons-800763 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:31:56.335517  587312 cli_runner.go:211] docker network inspect addons-800763 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:31:56.335631  587312 network_create.go:284] running [docker network inspect addons-800763] to gather additional debugging logs...
	I1115 10:31:56.335659  587312 cli_runner.go:164] Run: docker network inspect addons-800763
	W1115 10:31:56.351975  587312 cli_runner.go:211] docker network inspect addons-800763 returned with exit code 1
	I1115 10:31:56.352004  587312 network_create.go:287] error running [docker network inspect addons-800763]: docker network inspect addons-800763: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-800763 not found
	I1115 10:31:56.352019  587312 network_create.go:289] output of [docker network inspect addons-800763]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-800763 not found
	
	** /stderr **
	I1115 10:31:56.352129  587312 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:31:56.368577  587312 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019afa50}
	I1115 10:31:56.368625  587312 network_create.go:124] attempt to create docker network addons-800763 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1115 10:31:56.368690  587312 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-800763 addons-800763
	I1115 10:31:56.425254  587312 network_create.go:108] docker network addons-800763 192.168.49.0/24 created
	I1115 10:31:56.425288  587312 kic.go:121] calculated static IP "192.168.49.2" for the "addons-800763" container
	I1115 10:31:56.425361  587312 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:31:56.440738  587312 cli_runner.go:164] Run: docker volume create addons-800763 --label name.minikube.sigs.k8s.io=addons-800763 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:31:56.458783  587312 oci.go:103] Successfully created a docker volume addons-800763
	I1115 10:31:56.458873  587312 cli_runner.go:164] Run: docker run --rm --name addons-800763-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-800763 --entrypoint /usr/bin/test -v addons-800763:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:31:58.529630  587312 cli_runner.go:217] Completed: docker run --rm --name addons-800763-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-800763 --entrypoint /usr/bin/test -v addons-800763:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (2.070716474s)
	I1115 10:31:58.529663  587312 oci.go:107] Successfully prepared a docker volume addons-800763
	I1115 10:31:58.529720  587312 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:31:58.529735  587312 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:31:58.529797  587312 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-800763:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:32:03.436836  587312 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-800763:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.906982785s)
	I1115 10:32:03.436887  587312 kic.go:203] duration metric: took 4.907146176s to extract preloaded images to volume ...
	W1115 10:32:03.437035  587312 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:32:03.437161  587312 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:32:03.496409  587312 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-800763 --name addons-800763 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-800763 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-800763 --network addons-800763 --ip 192.168.49.2 --volume addons-800763:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:32:03.795587  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Running}}
	I1115 10:32:03.815598  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:03.841313  587312 cli_runner.go:164] Run: docker exec addons-800763 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:32:03.891501  587312 oci.go:144] the created container "addons-800763" has a running status.
	I1115 10:32:03.891536  587312 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa...
	I1115 10:32:04.282914  587312 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:32:04.307305  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:04.338932  587312 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:32:04.338953  587312 kic_runner.go:114] Args: [docker exec --privileged addons-800763 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:32:04.400472  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:04.427095  587312 machine.go:94] provisionDockerMachine start ...
	I1115 10:32:04.427207  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:04.458334  587312 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:04.458653  587312 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I1115 10:32:04.458663  587312 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:32:04.461063  587312 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44640->127.0.0.1:33509: read: connection reset by peer
	I1115 10:32:07.612377  587312 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-800763
	
	I1115 10:32:07.612397  587312 ubuntu.go:182] provisioning hostname "addons-800763"
	I1115 10:32:07.612459  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:07.629751  587312 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:07.630076  587312 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I1115 10:32:07.630093  587312 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-800763 && echo "addons-800763" | sudo tee /etc/hostname
	I1115 10:32:07.790030  587312 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-800763
	
	I1115 10:32:07.790108  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:07.808194  587312 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:07.808523  587312 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I1115 10:32:07.808540  587312 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-800763' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-800763/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-800763' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:32:07.961125  587312 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:32:07.961157  587312 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 10:32:07.961184  587312 ubuntu.go:190] setting up certificates
	I1115 10:32:07.961200  587312 provision.go:84] configureAuth start
	I1115 10:32:07.961271  587312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-800763
	I1115 10:32:07.978195  587312 provision.go:143] copyHostCerts
	I1115 10:32:07.978278  587312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 10:32:07.978399  587312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 10:32:07.978467  587312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 10:32:07.978517  587312 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.addons-800763 san=[127.0.0.1 192.168.49.2 addons-800763 localhost minikube]
	I1115 10:32:08.200513  587312 provision.go:177] copyRemoteCerts
	I1115 10:32:08.200581  587312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:32:08.200632  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:08.216823  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:08.320747  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 10:32:08.338332  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 10:32:08.355641  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:32:08.372090  587312 provision.go:87] duration metric: took 410.873505ms to configureAuth
	I1115 10:32:08.372131  587312 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:32:08.372338  587312 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:32:08.372440  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:08.389244  587312 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:08.389561  587312 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I1115 10:32:08.389582  587312 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:32:08.652418  587312 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:32:08.652439  587312 machine.go:97] duration metric: took 4.225315697s to provisionDockerMachine
	I1115 10:32:08.652449  587312 client.go:176] duration metric: took 13.294087815s to LocalClient.Create
	I1115 10:32:08.652464  587312 start.go:167] duration metric: took 13.294153194s to libmachine.API.Create "addons-800763"
	I1115 10:32:08.652471  587312 start.go:293] postStartSetup for "addons-800763" (driver="docker")
	I1115 10:32:08.652481  587312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:32:08.652559  587312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:32:08.652604  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:08.669410  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:08.772907  587312 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:32:08.776190  587312 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:32:08.776219  587312 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:32:08.776230  587312 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 10:32:08.776292  587312 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 10:32:08.776320  587312 start.go:296] duration metric: took 123.842909ms for postStartSetup
	I1115 10:32:08.776621  587312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-800763
	I1115 10:32:08.793293  587312 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/config.json ...
	I1115 10:32:08.793580  587312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:32:08.793639  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:08.813566  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:08.917776  587312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:32:08.922216  587312 start.go:128] duration metric: took 13.567589009s to createHost
	I1115 10:32:08.922282  587312 start.go:83] releasing machines lock for "addons-800763", held for 13.56778641s
	I1115 10:32:08.922373  587312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-800763
	I1115 10:32:08.939243  587312 ssh_runner.go:195] Run: cat /version.json
	I1115 10:32:08.939292  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:08.939598  587312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:32:08.939671  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:08.958166  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:08.966553  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:09.148238  587312 ssh_runner.go:195] Run: systemctl --version
	I1115 10:32:09.154572  587312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:32:09.190097  587312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:32:09.194512  587312 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:32:09.194602  587312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:32:09.224170  587312 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 10:32:09.224208  587312 start.go:496] detecting cgroup driver to use...
	I1115 10:32:09.224265  587312 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:32:09.224339  587312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:32:09.240223  587312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:32:09.252694  587312 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:32:09.252810  587312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:32:09.270093  587312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:32:09.288582  587312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:32:09.406534  587312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:32:09.533739  587312 docker.go:234] disabling docker service ...
	I1115 10:32:09.533882  587312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:32:09.555841  587312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:32:09.568767  587312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:32:09.686368  587312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:32:09.799151  587312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:32:09.812975  587312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:32:09.827839  587312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:32:09.827950  587312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:09.837490  587312 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:32:09.837611  587312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:09.847407  587312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:09.856593  587312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:09.866118  587312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:32:09.873919  587312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:09.883218  587312 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:09.897405  587312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:09.906201  587312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:32:09.913762  587312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:32:09.921148  587312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:32:10.031234  587312 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:32:10.164600  587312 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:32:10.164686  587312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:32:10.168611  587312 start.go:564] Will wait 60s for crictl version
	I1115 10:32:10.168685  587312 ssh_runner.go:195] Run: which crictl
	I1115 10:32:10.172265  587312 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:32:10.197574  587312 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:32:10.197667  587312 ssh_runner.go:195] Run: crio --version
	I1115 10:32:10.229125  587312 ssh_runner.go:195] Run: crio --version
	I1115 10:32:10.259738  587312 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:32:10.262663  587312 cli_runner.go:164] Run: docker network inspect addons-800763 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:32:10.278466  587312 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 10:32:10.282416  587312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:32:10.292039  587312 kubeadm.go:884] updating cluster {Name:addons-800763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-800763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:32:10.292167  587312 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:32:10.292234  587312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:32:10.328150  587312 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:32:10.328175  587312 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:32:10.328229  587312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:32:10.353479  587312 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:32:10.353503  587312 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:32:10.353513  587312 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 10:32:10.353614  587312 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-800763 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-800763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:32:10.353694  587312 ssh_runner.go:195] Run: crio config
	I1115 10:32:10.405394  587312 cni.go:84] Creating CNI manager for ""
	I1115 10:32:10.405464  587312 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:32:10.405506  587312 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:32:10.405560  587312 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-800763 NodeName:addons-800763 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:32:10.405735  587312 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-800763"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:32:10.405849  587312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:32:10.413524  587312 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:32:10.413654  587312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:32:10.421379  587312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 10:32:10.434189  587312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:32:10.448162  587312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1115 10:32:10.461247  587312 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:32:10.464788  587312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:32:10.475029  587312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:32:10.589862  587312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:32:10.605029  587312 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763 for IP: 192.168.49.2
	I1115 10:32:10.605100  587312 certs.go:195] generating shared ca certs ...
	I1115 10:32:10.605131  587312 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:10.605332  587312 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 10:32:10.891595  587312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt ...
	I1115 10:32:10.891630  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt: {Name:mkd2d964bbd950f2151022277ba6c34aa6bbfb67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:10.891862  587312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key ...
	I1115 10:32:10.891879  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key: {Name:mkd189b08acbe67e485f91570547219e89ff9e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:10.891997  587312 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 10:32:11.558329  587312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt ...
	I1115 10:32:11.558359  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt: {Name:mkfe97a846764f12a527e0da6693f346b8237e50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:11.558555  587312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key ...
	I1115 10:32:11.558568  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key: {Name:mkc9b5bc0b27eb79bdc3d49b17edb212aca78dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:11.558646  587312 certs.go:257] generating profile certs ...
	I1115 10:32:11.558713  587312 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.key
	I1115 10:32:11.558734  587312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt with IP's: []
	I1115 10:32:11.738451  587312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt ...
	I1115 10:32:11.738482  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: {Name:mke7a97fa7c7255f436f93e6c3f21e4dc04c89c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:11.739278  587312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.key ...
	I1115 10:32:11.739293  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.key: {Name:mk971d630f8c84d7694f608589463b38a379183d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:11.739384  587312 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.key.7db96ebd
	I1115 10:32:11.739402  587312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.crt.7db96ebd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1115 10:32:12.319488  587312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.crt.7db96ebd ...
	I1115 10:32:12.319520  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.crt.7db96ebd: {Name:mk41f7f6e914662c8f5046a8f6123933c0255630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:12.319711  587312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.key.7db96ebd ...
	I1115 10:32:12.319725  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.key.7db96ebd: {Name:mkd6899b5d4e1dedc3a61fb0284b00ab5964bec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:12.320382  587312 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.crt.7db96ebd -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.crt
	I1115 10:32:12.320465  587312 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.key.7db96ebd -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.key
	I1115 10:32:12.320520  587312 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.key
	I1115 10:32:12.320540  587312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.crt with IP's: []
	I1115 10:32:13.281751  587312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.crt ...
	I1115 10:32:13.281793  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.crt: {Name:mkddd4d1dd755fec304ec78625965f894e655302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:13.282645  587312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.key ...
	I1115 10:32:13.282667  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.key: {Name:mkdb6382940922bea3ca5f2f8ef722e65a3541c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:13.282980  587312 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:32:13.283026  587312 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 10:32:13.283058  587312 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:32:13.283088  587312 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 10:32:13.283768  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:32:13.303217  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 10:32:13.323151  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:32:13.342730  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:32:13.361520  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 10:32:13.379565  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:32:13.396357  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:32:13.413477  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:32:13.430916  587312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:32:13.448913  587312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:32:13.462006  587312 ssh_runner.go:195] Run: openssl version
	I1115 10:32:13.468068  587312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:32:13.476511  587312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:32:13.480376  587312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:32:13.480460  587312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:32:13.522348  587312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:32:13.530682  587312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:32:13.534176  587312 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:32:13.534234  587312 kubeadm.go:401] StartCluster: {Name:addons-800763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-800763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:32:13.534310  587312 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:32:13.534370  587312 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:32:13.561007  587312 cri.go:89] found id: ""
	I1115 10:32:13.561094  587312 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:32:13.569128  587312 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:32:13.576749  587312 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:32:13.576847  587312 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:32:13.584793  587312 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:32:13.584812  587312 kubeadm.go:158] found existing configuration files:
	
	I1115 10:32:13.584875  587312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:32:13.592625  587312 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:32:13.592739  587312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:32:13.600125  587312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:32:13.607665  587312 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:32:13.607763  587312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:32:13.614739  587312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:32:13.622438  587312 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:32:13.622555  587312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:32:13.629989  587312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:32:13.637804  587312 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:32:13.637917  587312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:32:13.645085  587312 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:32:13.688140  587312 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:32:13.688504  587312 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:32:13.719238  587312 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:32:13.719343  587312 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 10:32:13.719399  587312 kubeadm.go:319] OS: Linux
	I1115 10:32:13.719473  587312 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:32:13.719549  587312 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:32:13.719621  587312 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:32:13.719693  587312 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:32:13.719766  587312 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:32:13.719835  587312 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:32:13.719903  587312 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:32:13.719973  587312 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:32:13.720041  587312 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:32:13.787866  587312 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:32:13.788037  587312 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:32:13.788160  587312 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:32:13.797380  587312 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:32:13.803545  587312 out.go:252]   - Generating certificates and keys ...
	I1115 10:32:13.803671  587312 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:32:13.803770  587312 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:32:15.821236  587312 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:32:16.076556  587312 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:32:16.738380  587312 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:32:17.441798  587312 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:32:17.805343  587312 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:32:17.805558  587312 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-800763 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 10:32:18.060448  587312 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:32:18.060669  587312 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-800763 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 10:32:19.321035  587312 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:32:19.953239  587312 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:32:20.465654  587312 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:32:20.465942  587312 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:32:20.894375  587312 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:32:21.108982  587312 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:32:21.580233  587312 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:32:22.441181  587312 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:32:22.753220  587312 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:32:22.754076  587312 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:32:22.757000  587312 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:32:22.760548  587312 out.go:252]   - Booting up control plane ...
	I1115 10:32:22.760666  587312 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:32:22.760757  587312 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:32:22.760835  587312 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:32:22.775423  587312 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:32:22.775780  587312 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:32:22.783695  587312 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:32:22.784665  587312 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:32:22.785080  587312 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:32:22.916632  587312 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:32:22.916760  587312 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:32:24.417347  587312 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500840447s
	I1115 10:32:24.420980  587312 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:32:24.421101  587312 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1115 10:32:24.421410  587312 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:32:24.421505  587312 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:32:28.532687  587312 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.110964158s
	I1115 10:32:29.383077  587312 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.962056799s
	I1115 10:32:30.922577  587312 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501473576s
	I1115 10:32:30.941926  587312 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:32:30.961614  587312 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:32:30.977197  587312 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:32:30.977610  587312 kubeadm.go:319] [mark-control-plane] Marking the node addons-800763 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:32:30.990175  587312 kubeadm.go:319] [bootstrap-token] Using token: nlvole.sy74anm863filc3q
	I1115 10:32:30.995090  587312 out.go:252]   - Configuring RBAC rules ...
	I1115 10:32:30.995226  587312 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:32:30.999399  587312 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:32:31.008355  587312 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:32:31.015191  587312 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:32:31.020095  587312 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:32:31.025576  587312 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:32:31.330611  587312 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:32:31.787957  587312 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:32:32.329472  587312 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:32:32.330833  587312 kubeadm.go:319] 
	I1115 10:32:32.330914  587312 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:32:32.330923  587312 kubeadm.go:319] 
	I1115 10:32:32.331003  587312 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:32:32.331008  587312 kubeadm.go:319] 
	I1115 10:32:32.331035  587312 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:32:32.331098  587312 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:32:32.331151  587312 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:32:32.331155  587312 kubeadm.go:319] 
	I1115 10:32:32.331212  587312 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:32:32.331216  587312 kubeadm.go:319] 
	I1115 10:32:32.331266  587312 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:32:32.331270  587312 kubeadm.go:319] 
	I1115 10:32:32.331325  587312 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:32:32.331404  587312 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:32:32.331475  587312 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:32:32.331480  587312 kubeadm.go:319] 
	I1115 10:32:32.331569  587312 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:32:32.331649  587312 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:32:32.331654  587312 kubeadm.go:319] 
	I1115 10:32:32.331743  587312 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nlvole.sy74anm863filc3q \
	I1115 10:32:32.331851  587312 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a \
	I1115 10:32:32.331873  587312 kubeadm.go:319] 	--control-plane 
	I1115 10:32:32.331895  587312 kubeadm.go:319] 
	I1115 10:32:32.331985  587312 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:32:32.331989  587312 kubeadm.go:319] 
	I1115 10:32:32.332075  587312 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nlvole.sy74anm863filc3q \
	I1115 10:32:32.332183  587312 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a 
	I1115 10:32:32.334745  587312 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:32:32.335001  587312 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 10:32:32.335123  587312 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:32:32.335161  587312 cni.go:84] Creating CNI manager for ""
	I1115 10:32:32.335178  587312 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:32:32.338388  587312 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:32:32.341339  587312 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:32:32.345403  587312 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:32:32.345423  587312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:32:32.358251  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:32:32.643532  587312 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:32:32.643739  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:32.643866  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-800763 minikube.k8s.io/updated_at=2025_11_15T10_32_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=addons-800763 minikube.k8s.io/primary=true
	I1115 10:32:32.659471  587312 ops.go:34] apiserver oom_adj: -16
	I1115 10:32:32.759116  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:33.259931  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:33.759788  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:34.259952  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:34.759221  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:35.259195  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:35.759883  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:36.260029  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:36.759987  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:37.260075  587312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:32:37.393123  587312 kubeadm.go:1114] duration metric: took 4.749432812s to wait for elevateKubeSystemPrivileges
	I1115 10:32:37.393178  587312 kubeadm.go:403] duration metric: took 23.858946632s to StartCluster
	I1115 10:32:37.393196  587312 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:37.393365  587312 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 10:32:37.393847  587312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:37.394109  587312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:32:37.394210  587312 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:32:37.394430  587312 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:32:37.394546  587312 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1115 10:32:37.394636  587312 addons.go:70] Setting yakd=true in profile "addons-800763"
	I1115 10:32:37.394656  587312 addons.go:239] Setting addon yakd=true in "addons-800763"
	I1115 10:32:37.394681  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.395141  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.395763  587312 addons.go:70] Setting inspektor-gadget=true in profile "addons-800763"
	I1115 10:32:37.395784  587312 addons.go:239] Setting addon inspektor-gadget=true in "addons-800763"
	I1115 10:32:37.395809  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.396223  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.396682  587312 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-800763"
	I1115 10:32:37.396703  587312 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-800763"
	I1115 10:32:37.396727  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.397143  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.400935  587312 addons.go:70] Setting cloud-spanner=true in profile "addons-800763"
	I1115 10:32:37.400982  587312 addons.go:239] Setting addon cloud-spanner=true in "addons-800763"
	I1115 10:32:37.401016  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.401441  587312 addons.go:70] Setting metrics-server=true in profile "addons-800763"
	I1115 10:32:37.401955  587312 addons.go:239] Setting addon metrics-server=true in "addons-800763"
	I1115 10:32:37.401489  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.404777  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.401496  587312 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-800763"
	I1115 10:32:37.405290  587312 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-800763"
	I1115 10:32:37.405313  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.405762  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.401550  587312 addons.go:70] Setting default-storageclass=true in profile "addons-800763"
	I1115 10:32:37.418152  587312 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-800763"
	I1115 10:32:37.401554  587312 addons.go:70] Setting gcp-auth=true in profile "addons-800763"
	I1115 10:32:37.418896  587312 mustload.go:66] Loading cluster: addons-800763
	I1115 10:32:37.401557  587312 addons.go:70] Setting ingress=true in profile "addons-800763"
	I1115 10:32:37.421792  587312 addons.go:239] Setting addon ingress=true in "addons-800763"
	I1115 10:32:37.421873  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.422371  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.422972  587312 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:32:37.423292  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.401561  587312 addons.go:70] Setting ingress-dns=true in profile "addons-800763"
	I1115 10:32:37.401593  587312 out.go:179] * Verifying Kubernetes components...
	I1115 10:32:37.401600  587312 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-800763"
	I1115 10:32:37.401612  587312 addons.go:70] Setting registry=true in profile "addons-800763"
	I1115 10:32:37.401619  587312 addons.go:70] Setting registry-creds=true in profile "addons-800763"
	I1115 10:32:37.401624  587312 addons.go:70] Setting storage-provisioner=true in profile "addons-800763"
	I1115 10:32:37.401634  587312 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-800763"
	I1115 10:32:37.401645  587312 addons.go:70] Setting volcano=true in profile "addons-800763"
	I1115 10:32:37.401651  587312 addons.go:70] Setting volumesnapshots=true in profile "addons-800763"
	I1115 10:32:37.421380  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.421629  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.456283  587312 addons.go:239] Setting addon ingress-dns=true in "addons-800763"
	I1115 10:32:37.456409  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.456946  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.457211  587312 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-800763"
	I1115 10:32:37.461317  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.484636  587312 addons.go:239] Setting addon volcano=true in "addons-800763"
	I1115 10:32:37.484733  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.485249  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.500455  587312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:32:37.500733  587312 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-800763"
	I1115 10:32:37.500795  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.501365  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.519484  587312 addons.go:239] Setting addon volumesnapshots=true in "addons-800763"
	I1115 10:32:37.519590  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.520192  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.538847  587312 addons.go:239] Setting addon registry=true in "addons-800763"
	I1115 10:32:37.539255  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.539755  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.566426  587312 addons.go:239] Setting addon registry-creds=true in "addons-800763"
	I1115 10:32:37.566554  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.573178  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.608971  587312 addons.go:239] Setting addon storage-provisioner=true in "addons-800763"
	I1115 10:32:37.609070  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.609576  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.618249  587312 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1115 10:32:37.622487  587312 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1115 10:32:37.622517  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1115 10:32:37.622582  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.634442  587312 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1115 10:32:37.640597  587312 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 10:32:37.640625  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1115 10:32:37.640693  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.664476  587312 addons.go:239] Setting addon default-storageclass=true in "addons-800763"
	I1115 10:32:37.664516  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.666689  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.667002  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.668347  587312 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1115 10:32:37.695606  587312 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1115 10:32:37.705584  587312 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1115 10:32:37.705648  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1115 10:32:37.705732  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.711868  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1115 10:32:37.715675  587312 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-800763"
	I1115 10:32:37.715716  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:37.716118  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:37.720072  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1115 10:32:37.724200  587312 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1115 10:32:37.724225  587312 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1115 10:32:37.724285  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.746905  587312 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1115 10:32:37.751243  587312 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1115 10:32:37.757931  587312 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 10:32:37.757955  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1115 10:32:37.758026  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	W1115 10:32:37.765804  587312 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1115 10:32:37.794184  587312 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 10:32:37.796074  587312 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1115 10:32:37.820041  587312 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 10:32:37.820106  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1115 10:32:37.820223  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.846928  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1115 10:32:37.853265  587312 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 10:32:37.881966  587312 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1115 10:32:37.885483  587312 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1115 10:32:37.885548  587312 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1115 10:32:37.885644  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.885675  587312 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1115 10:32:37.894808  587312 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 10:32:37.894886  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1115 10:32:37.894978  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.885786  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:37.885960  587312 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 10:32:37.916202  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1115 10:32:37.916280  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.885966  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1115 10:32:37.885986  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:37.886048  587312 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:32:37.917521  587312 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:32:37.917581  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.928991  587312 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1115 10:32:37.929015  587312 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1115 10:32:37.929087  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.941235  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1115 10:32:37.947717  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1115 10:32:37.952989  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1115 10:32:37.956012  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1115 10:32:37.957872  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:37.958198  587312 out.go:179]   - Using image docker.io/registry:3.0.0
	I1115 10:32:37.958748  587312 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:32:37.959415  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:37.965288  587312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1115 10:32:37.965321  587312 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1115 10:32:37.965534  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:37.968221  587312 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1115 10:32:37.965607  587312 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:32:37.966517  587312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:32:37.969697  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:32:37.969767  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.970346  587312 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1115 10:32:37.970362  587312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1115 10:32:37.970414  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:37.981421  587312 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1115 10:32:37.981448  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1115 10:32:37.981510  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:38.002465  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.005830  587312 out.go:179]   - Using image docker.io/busybox:stable
	I1115 10:32:38.009746  587312 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 10:32:38.009776  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1115 10:32:38.009854  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:38.071450  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.085270  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.090511  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.129153  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.140126  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.152088  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.158603  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.165482  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:38.166365  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	W1115 10:32:38.168420  587312 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1115 10:32:38.168460  587312 retry.go:31] will retry after 144.631039ms: ssh: handshake failed: EOF
	I1115 10:32:38.273510  587312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:32:38.594283  587312 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1115 10:32:38.594351  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1115 10:32:38.635470  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:32:38.672203  587312 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1115 10:32:38.672273  587312 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1115 10:32:38.683919  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:32:38.797699  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1115 10:32:38.808618  587312 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1115 10:32:38.808694  587312 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1115 10:32:38.816292  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 10:32:38.832851  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 10:32:38.836071  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1115 10:32:38.838625  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 10:32:38.878464  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 10:32:38.880834  587312 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 10:32:38.880894  587312 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1115 10:32:38.917565  587312 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1115 10:32:38.917640  587312 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1115 10:32:38.945607  587312 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1115 10:32:38.945681  587312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1115 10:32:38.980419  587312 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1115 10:32:38.980493  587312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1115 10:32:39.105169  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 10:32:39.106766  587312 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1115 10:32:39.106835  587312 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1115 10:32:39.109054  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 10:32:39.114348  587312 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1115 10:32:39.114426  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1115 10:32:39.146366  587312 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1115 10:32:39.146440  587312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1115 10:32:39.164219  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 10:32:39.184092  587312 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1115 10:32:39.184168  587312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1115 10:32:39.241349  587312 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1115 10:32:39.241433  587312 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1115 10:32:39.292463  587312 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1115 10:32:39.292538  587312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1115 10:32:39.317893  587312 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1115 10:32:39.317972  587312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1115 10:32:39.318460  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1115 10:32:39.446237  587312 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1115 10:32:39.446313  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1115 10:32:39.451395  587312 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1115 10:32:39.451472  587312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1115 10:32:39.476629  587312 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1115 10:32:39.476708  587312 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1115 10:32:39.590961  587312 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 10:32:39.591033  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1115 10:32:39.705578  587312 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1115 10:32:39.705601  587312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1115 10:32:39.725668  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1115 10:32:39.752575  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 10:32:39.889049  587312 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.919524781s)
	I1115 10:32:39.889130  587312 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1115 10:32:39.890272  587312 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.616733924s)
	I1115 10:32:39.891216  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.255720846s)
	I1115 10:32:39.891168  587312 node_ready.go:35] waiting up to 6m0s for node "addons-800763" to be "Ready" ...
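	The sed pipeline that completed above (1.92s) inserts a hosts stanza ahead of the "forward . /etc/resolv.conf" block so that host.minikube.internal resolves to 192.168.49.1 from inside the cluster. A minimal way to confirm the injected record, assuming the stock kube-system/coredns ConfigMap layout (the grep pattern is illustrative only):

	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	  # expected stanza, per the sed expression shown in the log:
	  #        hosts {
	  #           192.168.49.1 host.minikube.internal
	  #           fallthrough
	  #        }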
	I1115 10:32:39.984619  587312 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1115 10:32:39.984693  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1115 10:32:40.177026  587312 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1115 10:32:40.177055  587312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1115 10:32:40.398540  587312 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-800763" context rescaled to 1 replicas
	I1115 10:32:40.451207  587312 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1115 10:32:40.451233  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1115 10:32:40.700076  587312 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1115 10:32:40.700100  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1115 10:32:40.944179  587312 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 10:32:40.944206  587312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1115 10:32:41.111216  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1115 10:32:41.910389  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
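	node_ready.go keeps polling the node object until its Ready condition flips to True, within the 6m0s budget set above. A hedged kubectl equivalent of that poll; the node name comes from the log, the jsonpath form is illustrative:

	  kubectl get node addons-800763 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	  # prints "False" until kubelet and the CNI report the node Ready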
	I1115 10:32:42.150163  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.466155631s)
	I1115 10:32:42.890475  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.092734485s)
	I1115 10:32:42.890536  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.074181992s)
	I1115 10:32:43.550016  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.717010247s)
	I1115 10:32:43.550050  587312 addons.go:480] Verifying addon ingress=true in "addons-800763"
	I1115 10:32:43.550210  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.714117536s)
	I1115 10:32:43.550276  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.711632698s)
	I1115 10:32:43.550311  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.671827092s)
	I1115 10:32:43.550348  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.445119592s)
	I1115 10:32:43.550396  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.441280046s)
	I1115 10:32:43.550535  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.386243169s)
	I1115 10:32:43.550550  587312 addons.go:480] Verifying addon metrics-server=true in "addons-800763"
	I1115 10:32:43.550583  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.232072014s)
	I1115 10:32:43.550595  587312 addons.go:480] Verifying addon registry=true in "addons-800763"
	I1115 10:32:43.550793  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.825058516s)
	I1115 10:32:43.554410  587312 out.go:179] * Verifying registry addon...
	I1115 10:32:43.554489  587312 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-800763 service yakd-dashboard -n yakd-dashboard
	
	I1115 10:32:43.554517  587312 out.go:179] * Verifying ingress addon...
	I1115 10:32:43.558852  587312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1115 10:32:43.559699  587312 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1115 10:32:43.573016  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.820347961s)
	W1115 10:32:43.573063  587312 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1115 10:32:43.573083  587312 retry.go:31] will retry after 251.771026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
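	The "no matches for kind VolumeSnapshotClass" failure above is an ordering race: the snapshot CRDs and the VolumeSnapshotClass object are applied in the same batch, so the class's kind is not yet registered on the first pass, and minikube simply retries (with --force on the next attempt, below). A hedged sketch of the equivalent manual ordering, reusing the file names from the log; the explicit wait step is an assumption, not something minikube is shown doing here:

	  kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	                -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	                -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	  kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml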
	I1115 10:32:43.575048  587312 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 10:32:43.575072  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:43.576432  587312 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1115 10:32:43.576455  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:43.825723  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 10:32:44.042817  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.931553462s)
	I1115 10:32:44.042853  587312 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-800763"
	I1115 10:32:44.046009  587312 out.go:179] * Verifying csi-hostpath-driver addon...
	I1115 10:32:44.049601  587312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1115 10:32:44.073115  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:44.073257  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:44.074038  587312 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 10:32:44.074062  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 10:32:44.395308  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:44.553237  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:44.561862  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:44.563385  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:45.055457  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:45.071447  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:45.071826  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:45.339020  587312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1115 10:32:45.339298  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:45.358510  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:45.470241  587312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1115 10:32:45.483856  587312 addons.go:239] Setting addon gcp-auth=true in "addons-800763"
	I1115 10:32:45.483906  587312 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:32:45.484358  587312 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:32:45.501659  587312 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1115 10:32:45.501715  587312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:32:45.521164  587312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:32:45.553568  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:45.563049  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:45.563573  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:46.052604  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:46.062576  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:46.063662  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:46.553684  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:46.563892  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:46.565091  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:46.657461  587312 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.155768557s)
	I1115 10:32:46.657712  587312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.831654278s)
	I1115 10:32:46.660916  587312 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 10:32:46.663956  587312 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1115 10:32:46.666913  587312 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1115 10:32:46.666938  587312 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1115 10:32:46.680447  587312 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1115 10:32:46.680514  587312 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1115 10:32:46.693887  587312 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 10:32:46.693909  587312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1115 10:32:46.707473  587312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1115 10:32:46.895216  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:47.055022  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:47.139818  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:47.140492  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:47.217773  587312 addons.go:480] Verifying addon gcp-auth=true in "addons-800763"
	I1115 10:32:47.220814  587312 out.go:179] * Verifying gcp-auth addon...
	I1115 10:32:47.224525  587312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1115 10:32:47.229776  587312 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1115 10:32:47.229841  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
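	kapi.go is polling the gcp-auth webhook pod by label until it leaves Pending and reports Ready. A hedged kubectl equivalent of that wait; the label and namespace are taken from the log, the timeout is illustrative:

	  kubectl -n gcp-auth wait pod -l kubernetes.io/minikube-addons=gcp-auth --for=condition=Ready --timeout=6m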
	I1115 10:32:47.553719  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:47.563082  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:47.563217  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:47.728332  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:48.055913  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:48.062880  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:48.063263  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:48.228255  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:48.552882  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:48.561730  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:48.563612  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:48.727688  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:49.053381  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:49.062858  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:49.063243  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:49.228384  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:32:49.394148  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:49.552976  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:49.561891  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:49.562829  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:49.727439  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:50.053714  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:50.063046  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:50.063126  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:50.228798  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:50.552542  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:50.562595  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:50.562764  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:50.727687  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:51.052995  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:51.061973  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:51.065366  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:51.227464  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:32:51.394511  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:51.553553  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:51.563063  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:51.563147  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:51.728236  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:52.053334  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:52.062305  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:52.062504  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:52.228685  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:52.552965  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:52.561616  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:52.563293  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:52.728396  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:53.053393  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:53.062402  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:53.063458  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:53.227571  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:32:53.394791  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:53.552803  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:53.562527  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:53.562586  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:53.727406  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:54.053748  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:54.062651  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:54.062998  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:54.227815  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:54.553359  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:54.561815  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:54.562830  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:54.729193  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:55.053856  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:55.062510  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:55.064237  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:55.227946  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:55.552655  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:55.562695  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:55.563139  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:55.727981  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:32:55.895531  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:56.053688  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:56.062825  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:56.063051  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:56.228045  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:56.552726  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:56.564256  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:56.564349  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:56.728699  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:57.053714  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:57.062382  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:57.062681  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:57.227327  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:57.553183  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:57.562103  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:57.563293  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:57.728317  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:58.053016  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:58.062362  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:58.062928  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:58.227735  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:32:58.394442  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:32:58.553253  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:58.562211  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:58.563487  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:58.727650  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:59.053087  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:59.061734  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:59.063008  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:59.227620  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:32:59.553067  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:32:59.561279  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:32:59.562619  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:32:59.728677  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:00.110953  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:00.112774  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:00.134869  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:00.234331  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:00.395931  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:00.552978  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:00.563389  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:00.563799  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:00.727454  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:01.053172  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:01.062150  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:01.063327  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:01.228450  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:01.553638  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:01.563030  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:01.563094  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:01.728305  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:02.053537  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:02.062645  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:02.062723  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:02.227986  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:02.554457  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:02.563244  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:02.563396  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:02.727473  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:02.894606  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:03.053086  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:03.063187  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:03.063611  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:03.227647  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:03.553868  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:03.561758  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:03.563769  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:03.727463  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:04.052740  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:04.063171  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:04.063387  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:04.227948  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:04.552656  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:04.562801  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:04.563002  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:04.728319  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:05.053806  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:05.062812  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:05.063211  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:05.228257  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:05.395069  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:05.552935  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:05.562895  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:05.562978  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:05.728182  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:06.053624  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:06.062163  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:06.063199  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:06.230684  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:06.552355  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:06.562531  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:06.562823  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:06.728112  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:07.053319  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:07.062805  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:07.063151  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:07.228102  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:07.395437  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:07.552452  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:07.562138  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:07.563322  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:07.727483  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:08.053576  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:08.062918  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:08.063258  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:08.228257  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:08.552897  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:08.562748  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:08.562946  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:08.727663  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:09.053516  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:09.062211  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:09.063185  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:09.227942  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:09.552662  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:09.564049  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:09.564775  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:09.727714  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:09.894884  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:10.053474  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:10.062983  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:10.063179  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:10.227878  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:10.553170  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:10.562141  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:10.564273  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:10.728202  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:11.053809  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:11.062548  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:11.062752  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:11.227567  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:11.553653  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:11.562931  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:11.563101  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:11.728664  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:12.053453  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:12.063090  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:12.063358  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:12.228192  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:12.394184  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:12.553444  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:12.562510  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:12.562645  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:12.727542  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:13.053317  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:13.062264  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:13.063818  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:13.227836  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:13.557173  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:13.562516  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:13.563332  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:13.728522  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:14.053901  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:14.062241  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:14.062853  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:14.227907  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:14.395633  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:14.552385  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:14.562687  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:14.562965  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:14.727664  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:15.055241  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:15.063398  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:15.063472  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:15.228368  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:15.553351  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:15.562514  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:15.562250  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:15.727476  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:16.053608  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:16.063399  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:16.063880  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:16.228024  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:16.552739  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:16.562841  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:16.562989  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:16.727975  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 10:33:16.894801  587312 node_ready.go:57] node "addons-800763" has "Ready":"False" status (will retry)
	I1115 10:33:17.053076  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:17.062943  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:17.063122  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:17.227719  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:17.552698  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:17.562896  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:17.563021  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:17.727828  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:18.053460  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:18.062878  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:18.063051  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:18.227973  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:18.552773  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:18.563310  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:18.563577  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:18.728322  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:18.914234  587312 node_ready.go:49] node "addons-800763" is "Ready"
	I1115 10:33:18.914266  587312 node_ready.go:38] duration metric: took 39.022826814s for node "addons-800763" to be "Ready" ...
	I1115 10:33:18.914281  587312 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:33:18.914360  587312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:33:18.931291  587312 api_server.go:72] duration metric: took 41.537046599s to wait for apiserver process to appear ...
	I1115 10:33:18.931312  587312 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:33:18.931331  587312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 10:33:18.953315  587312 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 10:33:18.955607  587312 api_server.go:141] control plane version: v1.34.1
	I1115 10:33:18.955639  587312 api_server.go:131] duration metric: took 24.31901ms to wait for apiserver health ...
	I1115 10:33:18.955649  587312 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:33:18.971288  587312 system_pods.go:59] 19 kube-system pods found
	I1115 10:33:18.971325  587312 system_pods.go:61] "coredns-66bc5c9577-b4lj6" [0ee1c332-a1ab-4604-aad1-214952a53d07] Pending
	I1115 10:33:18.971332  587312 system_pods.go:61] "csi-hostpath-attacher-0" [de7b7f73-61b6-4a36-81d1-37e603004b87] Pending
	I1115 10:33:18.971337  587312 system_pods.go:61] "csi-hostpath-resizer-0" [2c7b9981-8cbf-4a2b-9574-466c8e994e01] Pending
	I1115 10:33:18.971405  587312 system_pods.go:61] "csi-hostpathplugin-b4dh9" [55fd9315-cb8f-42e6-97a0-fde619910c0a] Pending
	I1115 10:33:18.971417  587312 system_pods.go:61] "etcd-addons-800763" [2c8e5b83-56c1-46fe-8cc2-39c23a8e008d] Running
	I1115 10:33:18.971422  587312 system_pods.go:61] "kindnet-blpd7" [c0b223fb-ecbc-4d00-a17a-40274c700c52] Running
	I1115 10:33:18.971442  587312 system_pods.go:61] "kube-apiserver-addons-800763" [62b84128-b828-4908-8b16-91e9476240ce] Running
	I1115 10:33:18.971453  587312 system_pods.go:61] "kube-controller-manager-addons-800763" [9e722ddb-a0ab-4aba-ba82-5c0bdf11860c] Running
	I1115 10:33:18.971470  587312 system_pods.go:61] "kube-ingress-dns-minikube" [2dba2ab5-4914-478f-9c10-795dbab5f3af] Pending
	I1115 10:33:18.971483  587312 system_pods.go:61] "kube-proxy-pg4bh" [43dc5f94-c11b-496a-ae5d-99234a4deef4] Running
	I1115 10:33:18.971490  587312 system_pods.go:61] "kube-scheduler-addons-800763" [08d5b3fb-0f2f-4918-89e8-894a4e0e9c1d] Running
	I1115 10:33:18.971499  587312 system_pods.go:61] "metrics-server-85b7d694d7-prnnw" [e4bf858d-9ccf-4498-8768-bc438175359c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 10:33:18.971511  587312 system_pods.go:61] "nvidia-device-plugin-daemonset-hc67v" [b43ea056-fc7f-4902-9fbc-05323afb5fcb] Pending
	I1115 10:33:18.971519  587312 system_pods.go:61] "registry-6b586f9694-snxbp" [3a35acf9-5d71-44e2-ada7-fdace707ff15] Pending
	I1115 10:33:18.971527  587312 system_pods.go:61] "registry-creds-764b6fb674-66shb" [ee5928cb-0522-4d75-86c9-719f510099ea] Pending
	I1115 10:33:18.971536  587312 system_pods.go:61] "registry-proxy-frc5j" [e0a7d020-9896-4589-a8bc-55ad79130028] Pending
	I1115 10:33:18.971567  587312 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dtn6d" [b3f3e5e9-84bb-4850-a904-1d5a2c83b360] Pending
	I1115 10:33:18.971573  587312 system_pods.go:61] "snapshot-controller-7d9fbc56b8-s9tcg" [fdaace29-7975-400f-8eab-881c79905faf] Pending
	I1115 10:33:18.971578  587312 system_pods.go:61] "storage-provisioner" [a90fe012-a522-4f37-af5b-6658b6b6e0d9] Pending
	I1115 10:33:18.971597  587312 system_pods.go:74] duration metric: took 15.941548ms to wait for pod list to return data ...
	I1115 10:33:18.971614  587312 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:33:18.985943  587312 default_sa.go:45] found service account: "default"
	I1115 10:33:18.986020  587312 default_sa.go:55] duration metric: took 14.397777ms for default service account to be created ...
	I1115 10:33:18.986044  587312 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:33:18.998325  587312 system_pods.go:86] 19 kube-system pods found
	I1115 10:33:18.998404  587312 system_pods.go:89] "coredns-66bc5c9577-b4lj6" [0ee1c332-a1ab-4604-aad1-214952a53d07] Pending
	I1115 10:33:18.998425  587312 system_pods.go:89] "csi-hostpath-attacher-0" [de7b7f73-61b6-4a36-81d1-37e603004b87] Pending
	I1115 10:33:18.998444  587312 system_pods.go:89] "csi-hostpath-resizer-0" [2c7b9981-8cbf-4a2b-9574-466c8e994e01] Pending
	I1115 10:33:18.998477  587312 system_pods.go:89] "csi-hostpathplugin-b4dh9" [55fd9315-cb8f-42e6-97a0-fde619910c0a] Pending
	I1115 10:33:18.998500  587312 system_pods.go:89] "etcd-addons-800763" [2c8e5b83-56c1-46fe-8cc2-39c23a8e008d] Running
	I1115 10:33:18.998520  587312 system_pods.go:89] "kindnet-blpd7" [c0b223fb-ecbc-4d00-a17a-40274c700c52] Running
	I1115 10:33:18.998539  587312 system_pods.go:89] "kube-apiserver-addons-800763" [62b84128-b828-4908-8b16-91e9476240ce] Running
	I1115 10:33:18.998574  587312 system_pods.go:89] "kube-controller-manager-addons-800763" [9e722ddb-a0ab-4aba-ba82-5c0bdf11860c] Running
	I1115 10:33:18.998592  587312 system_pods.go:89] "kube-ingress-dns-minikube" [2dba2ab5-4914-478f-9c10-795dbab5f3af] Pending
	I1115 10:33:18.998612  587312 system_pods.go:89] "kube-proxy-pg4bh" [43dc5f94-c11b-496a-ae5d-99234a4deef4] Running
	I1115 10:33:18.998645  587312 system_pods.go:89] "kube-scheduler-addons-800763" [08d5b3fb-0f2f-4918-89e8-894a4e0e9c1d] Running
	I1115 10:33:18.998672  587312 system_pods.go:89] "metrics-server-85b7d694d7-prnnw" [e4bf858d-9ccf-4498-8768-bc438175359c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 10:33:18.998693  587312 system_pods.go:89] "nvidia-device-plugin-daemonset-hc67v" [b43ea056-fc7f-4902-9fbc-05323afb5fcb] Pending
	I1115 10:33:18.998727  587312 system_pods.go:89] "registry-6b586f9694-snxbp" [3a35acf9-5d71-44e2-ada7-fdace707ff15] Pending
	I1115 10:33:18.998753  587312 system_pods.go:89] "registry-creds-764b6fb674-66shb" [ee5928cb-0522-4d75-86c9-719f510099ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 10:33:18.998771  587312 system_pods.go:89] "registry-proxy-frc5j" [e0a7d020-9896-4589-a8bc-55ad79130028] Pending
	I1115 10:33:18.998789  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtn6d" [b3f3e5e9-84bb-4850-a904-1d5a2c83b360] Pending
	I1115 10:33:18.998823  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s9tcg" [fdaace29-7975-400f-8eab-881c79905faf] Pending
	I1115 10:33:18.998840  587312 system_pods.go:89] "storage-provisioner" [a90fe012-a522-4f37-af5b-6658b6b6e0d9] Pending
	I1115 10:33:18.998882  587312 retry.go:31] will retry after 287.447039ms: missing components: kube-dns
	I1115 10:33:19.072981  587312 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 10:33:19.073053  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:19.073666  587312 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 10:33:19.073726  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:19.073828  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:19.291034  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:19.298229  587312 system_pods.go:86] 19 kube-system pods found
	I1115 10:33:19.298311  587312 system_pods.go:89] "coredns-66bc5c9577-b4lj6" [0ee1c332-a1ab-4604-aad1-214952a53d07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:19.298334  587312 system_pods.go:89] "csi-hostpath-attacher-0" [de7b7f73-61b6-4a36-81d1-37e603004b87] Pending
	I1115 10:33:19.298354  587312 system_pods.go:89] "csi-hostpath-resizer-0" [2c7b9981-8cbf-4a2b-9574-466c8e994e01] Pending
	I1115 10:33:19.298386  587312 system_pods.go:89] "csi-hostpathplugin-b4dh9" [55fd9315-cb8f-42e6-97a0-fde619910c0a] Pending
	I1115 10:33:19.298410  587312 system_pods.go:89] "etcd-addons-800763" [2c8e5b83-56c1-46fe-8cc2-39c23a8e008d] Running
	I1115 10:33:19.298429  587312 system_pods.go:89] "kindnet-blpd7" [c0b223fb-ecbc-4d00-a17a-40274c700c52] Running
	I1115 10:33:19.298464  587312 system_pods.go:89] "kube-apiserver-addons-800763" [62b84128-b828-4908-8b16-91e9476240ce] Running
	I1115 10:33:19.298488  587312 system_pods.go:89] "kube-controller-manager-addons-800763" [9e722ddb-a0ab-4aba-ba82-5c0bdf11860c] Running
	I1115 10:33:19.298506  587312 system_pods.go:89] "kube-ingress-dns-minikube" [2dba2ab5-4914-478f-9c10-795dbab5f3af] Pending
	I1115 10:33:19.298525  587312 system_pods.go:89] "kube-proxy-pg4bh" [43dc5f94-c11b-496a-ae5d-99234a4deef4] Running
	I1115 10:33:19.298556  587312 system_pods.go:89] "kube-scheduler-addons-800763" [08d5b3fb-0f2f-4918-89e8-894a4e0e9c1d] Running
	I1115 10:33:19.298582  587312 system_pods.go:89] "metrics-server-85b7d694d7-prnnw" [e4bf858d-9ccf-4498-8768-bc438175359c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 10:33:19.298600  587312 system_pods.go:89] "nvidia-device-plugin-daemonset-hc67v" [b43ea056-fc7f-4902-9fbc-05323afb5fcb] Pending
	I1115 10:33:19.298637  587312 system_pods.go:89] "registry-6b586f9694-snxbp" [3a35acf9-5d71-44e2-ada7-fdace707ff15] Pending
	I1115 10:33:19.298663  587312 system_pods.go:89] "registry-creds-764b6fb674-66shb" [ee5928cb-0522-4d75-86c9-719f510099ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 10:33:19.298682  587312 system_pods.go:89] "registry-proxy-frc5j" [e0a7d020-9896-4589-a8bc-55ad79130028] Pending
	I1115 10:33:19.298721  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtn6d" [b3f3e5e9-84bb-4850-a904-1d5a2c83b360] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:19.298748  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s9tcg" [fdaace29-7975-400f-8eab-881c79905faf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:19.298768  587312 system_pods.go:89] "storage-provisioner" [a90fe012-a522-4f37-af5b-6658b6b6e0d9] Pending
	I1115 10:33:19.298812  587312 retry.go:31] will retry after 246.929858ms: missing components: kube-dns
	I1115 10:33:19.566013  587312 system_pods.go:86] 19 kube-system pods found
	I1115 10:33:19.566050  587312 system_pods.go:89] "coredns-66bc5c9577-b4lj6" [0ee1c332-a1ab-4604-aad1-214952a53d07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:19.566058  587312 system_pods.go:89] "csi-hostpath-attacher-0" [de7b7f73-61b6-4a36-81d1-37e603004b87] Pending
	I1115 10:33:19.566063  587312 system_pods.go:89] "csi-hostpath-resizer-0" [2c7b9981-8cbf-4a2b-9574-466c8e994e01] Pending
	I1115 10:33:19.566071  587312 system_pods.go:89] "csi-hostpathplugin-b4dh9" [55fd9315-cb8f-42e6-97a0-fde619910c0a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 10:33:19.566076  587312 system_pods.go:89] "etcd-addons-800763" [2c8e5b83-56c1-46fe-8cc2-39c23a8e008d] Running
	I1115 10:33:19.566081  587312 system_pods.go:89] "kindnet-blpd7" [c0b223fb-ecbc-4d00-a17a-40274c700c52] Running
	I1115 10:33:19.566086  587312 system_pods.go:89] "kube-apiserver-addons-800763" [62b84128-b828-4908-8b16-91e9476240ce] Running
	I1115 10:33:19.566091  587312 system_pods.go:89] "kube-controller-manager-addons-800763" [9e722ddb-a0ab-4aba-ba82-5c0bdf11860c] Running
	I1115 10:33:19.566101  587312 system_pods.go:89] "kube-ingress-dns-minikube" [2dba2ab5-4914-478f-9c10-795dbab5f3af] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 10:33:19.566105  587312 system_pods.go:89] "kube-proxy-pg4bh" [43dc5f94-c11b-496a-ae5d-99234a4deef4] Running
	I1115 10:33:19.566113  587312 system_pods.go:89] "kube-scheduler-addons-800763" [08d5b3fb-0f2f-4918-89e8-894a4e0e9c1d] Running
	I1115 10:33:19.566118  587312 system_pods.go:89] "metrics-server-85b7d694d7-prnnw" [e4bf858d-9ccf-4498-8768-bc438175359c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 10:33:19.566132  587312 system_pods.go:89] "nvidia-device-plugin-daemonset-hc67v" [b43ea056-fc7f-4902-9fbc-05323afb5fcb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 10:33:19.566138  587312 system_pods.go:89] "registry-6b586f9694-snxbp" [3a35acf9-5d71-44e2-ada7-fdace707ff15] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 10:33:19.566145  587312 system_pods.go:89] "registry-creds-764b6fb674-66shb" [ee5928cb-0522-4d75-86c9-719f510099ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 10:33:19.566155  587312 system_pods.go:89] "registry-proxy-frc5j" [e0a7d020-9896-4589-a8bc-55ad79130028] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 10:33:19.566167  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtn6d" [b3f3e5e9-84bb-4850-a904-1d5a2c83b360] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:19.566178  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s9tcg" [fdaace29-7975-400f-8eab-881c79905faf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:19.566185  587312 system_pods.go:89] "storage-provisioner" [a90fe012-a522-4f37-af5b-6658b6b6e0d9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:33:19.566199  587312 retry.go:31] will retry after 423.660097ms: missing components: kube-dns
	I1115 10:33:19.568262  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:19.573692  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:19.574272  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:19.731048  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:19.996506  587312 system_pods.go:86] 19 kube-system pods found
	I1115 10:33:19.996544  587312 system_pods.go:89] "coredns-66bc5c9577-b4lj6" [0ee1c332-a1ab-4604-aad1-214952a53d07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:33:19.996553  587312 system_pods.go:89] "csi-hostpath-attacher-0" [de7b7f73-61b6-4a36-81d1-37e603004b87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 10:33:19.996561  587312 system_pods.go:89] "csi-hostpath-resizer-0" [2c7b9981-8cbf-4a2b-9574-466c8e994e01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 10:33:19.996568  587312 system_pods.go:89] "csi-hostpathplugin-b4dh9" [55fd9315-cb8f-42e6-97a0-fde619910c0a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 10:33:19.996572  587312 system_pods.go:89] "etcd-addons-800763" [2c8e5b83-56c1-46fe-8cc2-39c23a8e008d] Running
	I1115 10:33:19.996578  587312 system_pods.go:89] "kindnet-blpd7" [c0b223fb-ecbc-4d00-a17a-40274c700c52] Running
	I1115 10:33:19.996586  587312 system_pods.go:89] "kube-apiserver-addons-800763" [62b84128-b828-4908-8b16-91e9476240ce] Running
	I1115 10:33:19.996590  587312 system_pods.go:89] "kube-controller-manager-addons-800763" [9e722ddb-a0ab-4aba-ba82-5c0bdf11860c] Running
	I1115 10:33:19.996599  587312 system_pods.go:89] "kube-ingress-dns-minikube" [2dba2ab5-4914-478f-9c10-795dbab5f3af] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 10:33:19.996603  587312 system_pods.go:89] "kube-proxy-pg4bh" [43dc5f94-c11b-496a-ae5d-99234a4deef4] Running
	I1115 10:33:19.996614  587312 system_pods.go:89] "kube-scheduler-addons-800763" [08d5b3fb-0f2f-4918-89e8-894a4e0e9c1d] Running
	I1115 10:33:19.996623  587312 system_pods.go:89] "metrics-server-85b7d694d7-prnnw" [e4bf858d-9ccf-4498-8768-bc438175359c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 10:33:19.996636  587312 system_pods.go:89] "nvidia-device-plugin-daemonset-hc67v" [b43ea056-fc7f-4902-9fbc-05323afb5fcb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 10:33:19.996642  587312 system_pods.go:89] "registry-6b586f9694-snxbp" [3a35acf9-5d71-44e2-ada7-fdace707ff15] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 10:33:19.996652  587312 system_pods.go:89] "registry-creds-764b6fb674-66shb" [ee5928cb-0522-4d75-86c9-719f510099ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 10:33:19.996658  587312 system_pods.go:89] "registry-proxy-frc5j" [e0a7d020-9896-4589-a8bc-55ad79130028] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 10:33:19.996664  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtn6d" [b3f3e5e9-84bb-4850-a904-1d5a2c83b360] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:19.996671  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s9tcg" [fdaace29-7975-400f-8eab-881c79905faf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:19.996679  587312 system_pods.go:89] "storage-provisioner" [a90fe012-a522-4f37-af5b-6658b6b6e0d9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:33:19.996698  587312 retry.go:31] will retry after 464.672682ms: missing components: kube-dns
	I1115 10:33:20.059901  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:20.102602  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:20.102787  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:20.227999  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:20.466503  587312 system_pods.go:86] 19 kube-system pods found
	I1115 10:33:20.466539  587312 system_pods.go:89] "coredns-66bc5c9577-b4lj6" [0ee1c332-a1ab-4604-aad1-214952a53d07] Running
	I1115 10:33:20.466548  587312 system_pods.go:89] "csi-hostpath-attacher-0" [de7b7f73-61b6-4a36-81d1-37e603004b87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 10:33:20.466555  587312 system_pods.go:89] "csi-hostpath-resizer-0" [2c7b9981-8cbf-4a2b-9574-466c8e994e01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 10:33:20.466564  587312 system_pods.go:89] "csi-hostpathplugin-b4dh9" [55fd9315-cb8f-42e6-97a0-fde619910c0a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 10:33:20.466571  587312 system_pods.go:89] "etcd-addons-800763" [2c8e5b83-56c1-46fe-8cc2-39c23a8e008d] Running
	I1115 10:33:20.466576  587312 system_pods.go:89] "kindnet-blpd7" [c0b223fb-ecbc-4d00-a17a-40274c700c52] Running
	I1115 10:33:20.466581  587312 system_pods.go:89] "kube-apiserver-addons-800763" [62b84128-b828-4908-8b16-91e9476240ce] Running
	I1115 10:33:20.466585  587312 system_pods.go:89] "kube-controller-manager-addons-800763" [9e722ddb-a0ab-4aba-ba82-5c0bdf11860c] Running
	I1115 10:33:20.466591  587312 system_pods.go:89] "kube-ingress-dns-minikube" [2dba2ab5-4914-478f-9c10-795dbab5f3af] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 10:33:20.466598  587312 system_pods.go:89] "kube-proxy-pg4bh" [43dc5f94-c11b-496a-ae5d-99234a4deef4] Running
	I1115 10:33:20.466603  587312 system_pods.go:89] "kube-scheduler-addons-800763" [08d5b3fb-0f2f-4918-89e8-894a4e0e9c1d] Running
	I1115 10:33:20.466609  587312 system_pods.go:89] "metrics-server-85b7d694d7-prnnw" [e4bf858d-9ccf-4498-8768-bc438175359c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 10:33:20.466622  587312 system_pods.go:89] "nvidia-device-plugin-daemonset-hc67v" [b43ea056-fc7f-4902-9fbc-05323afb5fcb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 10:33:20.466628  587312 system_pods.go:89] "registry-6b586f9694-snxbp" [3a35acf9-5d71-44e2-ada7-fdace707ff15] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 10:33:20.466640  587312 system_pods.go:89] "registry-creds-764b6fb674-66shb" [ee5928cb-0522-4d75-86c9-719f510099ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 10:33:20.466647  587312 system_pods.go:89] "registry-proxy-frc5j" [e0a7d020-9896-4589-a8bc-55ad79130028] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 10:33:20.466660  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtn6d" [b3f3e5e9-84bb-4850-a904-1d5a2c83b360] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:20.466667  587312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s9tcg" [fdaace29-7975-400f-8eab-881c79905faf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 10:33:20.466671  587312 system_pods.go:89] "storage-provisioner" [a90fe012-a522-4f37-af5b-6658b6b6e0d9] Running
	I1115 10:33:20.466683  587312 system_pods.go:126] duration metric: took 1.480584372s to wait for k8s-apps to be running ...
	I1115 10:33:20.466694  587312 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:33:20.466751  587312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:33:20.484125  587312 system_svc.go:56] duration metric: took 17.420976ms WaitForService to wait for kubelet
	I1115 10:33:20.484156  587312 kubeadm.go:587] duration metric: took 43.089915241s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:33:20.484185  587312 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:33:20.487211  587312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:33:20.487246  587312 node_conditions.go:123] node cpu capacity is 2
	I1115 10:33:20.487261  587312 node_conditions.go:105] duration metric: took 3.065514ms to run NodePressure ...
	I1115 10:33:20.487274  587312 start.go:242] waiting for startup goroutines ...
	I1115 10:33:20.553326  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:20.563462  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:20.563640  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:20.728121  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:21.054231  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:21.062803  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:21.064645  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:21.227819  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:21.553464  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:21.562451  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:21.563152  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:21.731530  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:22.054028  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:22.063209  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:22.065344  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:22.228149  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:22.554517  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:22.564318  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:22.564890  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:22.757900  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:23.054156  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:23.061781  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:23.063456  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:23.228547  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:23.552982  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:23.562058  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:23.564442  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:23.727997  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:24.053572  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:24.062114  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:24.064587  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:24.228216  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:24.554170  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:24.564277  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:24.564732  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:24.728097  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:25.053674  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:25.064454  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:25.064908  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:25.228309  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:25.553219  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:25.564232  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:25.564658  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:25.727696  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:26.054101  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:26.063241  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:26.063410  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:26.228568  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:26.552821  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:26.562616  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:26.563837  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:26.728100  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:27.053674  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:27.064612  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:27.065017  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:27.235883  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:27.553775  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:27.561531  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:27.564631  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:27.727598  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:28.052973  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:28.062359  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:28.063673  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:28.227910  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:28.553785  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:28.566994  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:28.567383  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:28.728691  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:29.054938  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:29.069996  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:29.070507  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:29.228176  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:29.555358  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:29.566542  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:29.567075  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:29.731425  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:30.099731  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:30.099891  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:30.101842  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:30.230401  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:30.552952  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:30.562763  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:30.564782  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:30.727802  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:31.057088  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:31.157803  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:31.158222  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:31.266563  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:31.554464  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:31.563127  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:31.563207  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:31.728423  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:32.053686  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:32.066021  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:32.066676  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:32.227694  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:32.554284  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:32.563966  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:32.564170  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:32.728341  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:33.053478  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:33.063641  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:33.064357  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:33.230356  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:33.552823  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:33.562808  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:33.565111  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:33.728311  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:34.053671  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:34.063581  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:34.064201  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:34.231590  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:34.553384  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:34.563171  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:34.563311  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:34.730914  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:35.054078  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:35.064195  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:35.064608  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:35.229140  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:35.554036  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:35.564518  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:35.564952  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:35.728212  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:36.052461  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:36.063867  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:36.063972  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:36.228312  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:36.552952  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:36.561817  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:36.564246  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:36.728473  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:37.054174  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:37.065043  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:37.067791  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:37.239766  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:37.554262  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:37.564464  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:37.565066  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:37.728411  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:38.053823  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:38.065271  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:38.065692  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:38.302307  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:38.553559  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:38.563269  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:38.564339  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:38.728289  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:39.053608  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:39.062926  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:39.064212  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:39.227988  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:39.553806  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:39.561956  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:39.563681  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:39.730196  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:40.057049  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:40.073559  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:40.074044  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:40.235027  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:40.554292  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:40.564587  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:40.565046  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:40.728605  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:41.053571  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:41.062932  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:41.063284  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:41.229955  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:41.554364  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:41.564339  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:41.564661  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:41.727689  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:42.053609  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:42.065007  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:42.065919  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:42.228615  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:42.554190  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:42.564321  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:42.564748  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:42.727684  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:43.053493  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:43.063495  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:43.064830  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:43.228045  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:43.553953  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:43.563313  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:43.563772  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:43.727654  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:44.053442  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:44.064059  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:44.064596  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:44.227457  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:44.553644  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:44.563761  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:44.564073  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:44.728811  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:45.066911  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:45.082737  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:45.116725  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:45.243730  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:45.553667  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:45.563033  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:45.564242  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:45.728583  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:46.053958  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:46.063237  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:46.063517  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:46.228162  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:46.553852  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:46.561611  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:46.563415  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:46.730507  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:47.053871  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:47.062796  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:47.062901  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:47.228232  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:47.553929  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:47.563459  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:47.563887  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:47.728114  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:48.054139  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:48.064255  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:48.064597  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:48.227615  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:48.553871  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:48.563334  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:48.563513  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:48.728631  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:49.053013  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:49.062009  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:49.063664  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:49.227844  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:49.554439  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:49.563806  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:49.564249  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:49.727635  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:50.054473  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:50.064040  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:50.067343  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:50.228275  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:50.554212  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:50.564646  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:50.565134  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:50.729703  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:51.053839  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:51.064836  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:51.065351  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:51.231027  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:51.561142  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:51.564312  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:51.565718  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:51.728224  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:52.053693  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:52.063548  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:52.063611  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:52.229980  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:52.554287  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:52.564007  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:52.564265  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:52.728228  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:53.054309  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:53.062820  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:53.063372  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:53.227668  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:53.553221  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:53.561799  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:53.563865  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:53.727873  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:54.053986  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:54.062130  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:54.063373  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:54.227891  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:54.553730  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:54.562184  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:54.563876  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:54.727804  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:55.054174  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:55.063455  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:55.063561  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:55.228082  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:55.553642  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:55.562948  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:55.563073  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:55.728495  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:56.053958  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:56.062226  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:56.063662  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:56.228483  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:56.554537  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:56.564085  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:56.564524  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:56.727972  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:57.054211  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:57.063364  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:57.065157  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:57.228479  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:57.553630  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:57.563480  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:57.563672  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:57.727906  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:58.053305  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:58.070371  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 10:33:58.076624  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:58.227479  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:58.558570  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:58.563803  587312 kapi.go:107] duration metric: took 1m15.004947568s to wait for kubernetes.io/minikube-addons=registry ...
	I1115 10:33:58.564186  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:58.728413  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:59.053936  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:59.062812  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:59.228058  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:33:59.553622  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:33:59.563760  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:33:59.727930  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:00.057207  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:00.072979  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:00.247986  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:00.554663  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:00.563644  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:00.727695  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:01.053799  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:01.063062  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:01.228835  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:01.553733  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:01.562967  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:01.730741  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:02.053589  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:02.063685  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:02.230105  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:02.553403  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:02.563630  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:02.727862  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:03.054005  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:03.063581  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:03.228048  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:03.554651  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:03.563153  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:03.729206  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:04.054779  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:04.064823  587312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 10:34:04.230163  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:04.554329  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:04.563360  587312 kapi.go:107] duration metric: took 1m21.003658667s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1115 10:34:04.728953  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:05.054438  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:05.229851  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:05.588691  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:05.728578  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:06.053790  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:06.227541  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:06.553246  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:06.728275  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:07.052907  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:07.228308  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:07.557839  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:07.729298  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:08.053688  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:08.227730  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:08.553502  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:08.728059  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:09.056651  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:09.229009  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:09.554374  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:09.728788  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:10.053745  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:10.229221  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:10.554539  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:10.728670  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:11.053511  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:11.228146  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 10:34:11.554766  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:11.728590  587312 kapi.go:107] duration metric: took 1m24.504064309s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1115 10:34:11.732140  587312 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-800763 cluster.
	I1115 10:34:11.735195  587312 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1115 10:34:11.738255  587312 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1115 10:34:12.053420  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:12.554091  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:13.054330  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:13.553270  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:14.053402  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:14.553572  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:15.061506  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:15.553360  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:16.053776  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:16.571624  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:17.064309  587312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 10:34:17.564393  587312 kapi.go:107] duration metric: took 1m33.514791144s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1115 10:34:17.605876  587312 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, inspektor-gadget, amd-gpu-device-plugin, cloud-spanner, ingress-dns, nvidia-device-plugin, registry-creds, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1115 10:34:17.618650  587312 addons.go:515] duration metric: took 1m40.224083035s for enable addons: enabled=[default-storageclass storage-provisioner inspektor-gadget amd-gpu-device-plugin cloud-spanner ingress-dns nvidia-device-plugin registry-creds metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1115 10:34:17.618710  587312 start.go:247] waiting for cluster config update ...
	I1115 10:34:17.618734  587312 start.go:256] writing updated cluster config ...
	I1115 10:34:17.620311  587312 ssh_runner.go:195] Run: rm -f paused
	I1115 10:34:17.629425  587312 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:34:17.635084  587312 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b4lj6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:17.640307  587312 pod_ready.go:94] pod "coredns-66bc5c9577-b4lj6" is "Ready"
	I1115 10:34:17.640376  587312 pod_ready.go:86] duration metric: took 5.219024ms for pod "coredns-66bc5c9577-b4lj6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:17.642977  587312 pod_ready.go:83] waiting for pod "etcd-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:17.652700  587312 pod_ready.go:94] pod "etcd-addons-800763" is "Ready"
	I1115 10:34:17.652775  587312 pod_ready.go:86] duration metric: took 9.734245ms for pod "etcd-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:17.672118  587312 pod_ready.go:83] waiting for pod "kube-apiserver-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:17.677660  587312 pod_ready.go:94] pod "kube-apiserver-addons-800763" is "Ready"
	I1115 10:34:17.677737  587312 pod_ready.go:86] duration metric: took 5.549949ms for pod "kube-apiserver-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:17.680811  587312 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:18.034405  587312 pod_ready.go:94] pod "kube-controller-manager-addons-800763" is "Ready"
	I1115 10:34:18.034491  587312 pod_ready.go:86] duration metric: took 353.580808ms for pod "kube-controller-manager-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:18.233970  587312 pod_ready.go:83] waiting for pod "kube-proxy-pg4bh" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:18.633299  587312 pod_ready.go:94] pod "kube-proxy-pg4bh" is "Ready"
	I1115 10:34:18.633368  587312 pod_ready.go:86] duration metric: took 399.370522ms for pod "kube-proxy-pg4bh" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:18.834347  587312 pod_ready.go:83] waiting for pod "kube-scheduler-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:19.233708  587312 pod_ready.go:94] pod "kube-scheduler-addons-800763" is "Ready"
	I1115 10:34:19.233745  587312 pod_ready.go:86] duration metric: took 399.327617ms for pod "kube-scheduler-addons-800763" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:19.233763  587312 pod_ready.go:40] duration metric: took 1.604307989s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:34:19.292084  587312 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:34:19.296093  587312 out.go:179] * Done! kubectl is now configured to use "addons-800763" cluster and "default" namespace by default
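
	The gcp-auth hint in the log above says per-pod credential injection can be skipped by adding a label with the `gcp-auth-skip-secret` key to the pod. A minimal CLI sketch of that, assuming a disposable pod name (`no-gcp-pod`) and assuming the value "true" is accepted for the label (only the key comes from the log); the label must be present at creation time, since injection happens when the pod is admitted:

	# Hypothetical example: the pod name and the label value "true" are assumptions;
	# the gcp-auth-skip-secret label key comes from the minikube output above.
	kubectl run no-gcp-pod \
	  --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
	  --labels="gcp-auth-skip-secret=true" \
	  --restart=Never -- sleep 3600

	# Check which volumes ended up in the pod spec (the credential volume should be absent).
	kubectl get pod no-gcp-pod -o jsonpath='{.spec.volumes[*].name}'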
	
	
	==> CRI-O <==
	Nov 15 10:34:20 addons-800763 crio[828]: time="2025-11-15T10:34:20.365935907Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0661af5accbd067f59ba7cdb547881040f90a95a45b5cf655cf71987cc8d3382 UID:4586036d-8a43-480f-b6fc-9fa267e5a0d7 NetNS:/var/run/netns/5c48a2f9-8224-4901-9fba-fed64f4ec484 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001482b38}] Aliases:map[]}"
	Nov 15 10:34:20 addons-800763 crio[828]: time="2025-11-15T10:34:20.366095884Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 10:34:20 addons-800763 crio[828]: time="2025-11-15T10:34:20.369956689Z" level=info msg="Ran pod sandbox 0661af5accbd067f59ba7cdb547881040f90a95a45b5cf655cf71987cc8d3382 with infra container: default/busybox/POD" id=270a5b45-5c82-4dd2-ab98-e85dfc6eea76 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:34:20 addons-800763 crio[828]: time="2025-11-15T10:34:20.371015992Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=11e9f41f-c263-4b75-93b1-f0166ac17778 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:20 addons-800763 crio[828]: time="2025-11-15T10:34:20.371134902Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=11e9f41f-c263-4b75-93b1-f0166ac17778 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:20 addons-800763 crio[828]: time="2025-11-15T10:34:20.371186365Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=11e9f41f-c263-4b75-93b1-f0166ac17778 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:20 addons-800763 crio[828]: time="2025-11-15T10:34:20.3730437Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a5023cfa-efc2-409d-b133-72a59dad270d name=/runtime.v1.ImageService/PullImage
	Nov 15 10:34:20 addons-800763 crio[828]: time="2025-11-15T10:34:20.374987116Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 10:34:22 addons-800763 crio[828]: time="2025-11-15T10:34:22.498286359Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a5023cfa-efc2-409d-b133-72a59dad270d name=/runtime.v1.ImageService/PullImage
	Nov 15 10:34:22 addons-800763 crio[828]: time="2025-11-15T10:34:22.499587232Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bb77ea01-288c-480c-81f9-4a890157726f name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:22 addons-800763 crio[828]: time="2025-11-15T10:34:22.501861138Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=767c1824-3e10-4306-aa34-c6ceacde5702 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:22 addons-800763 crio[828]: time="2025-11-15T10:34:22.507809393Z" level=info msg="Creating container: default/busybox/busybox" id=69a426e4-a63a-48a2-a003-ad4d304b05b9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:34:22 addons-800763 crio[828]: time="2025-11-15T10:34:22.508086008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:22 addons-800763 crio[828]: time="2025-11-15T10:34:22.520452539Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:22 addons-800763 crio[828]: time="2025-11-15T10:34:22.521225373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:22 addons-800763 crio[828]: time="2025-11-15T10:34:22.53824987Z" level=info msg="Created container b1cb9e8cb1d134878da767afb5fd02fd13174f809faf97194fa4cc8d90319e4a: default/busybox/busybox" id=69a426e4-a63a-48a2-a003-ad4d304b05b9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:34:22 addons-800763 crio[828]: time="2025-11-15T10:34:22.53927737Z" level=info msg="Starting container: b1cb9e8cb1d134878da767afb5fd02fd13174f809faf97194fa4cc8d90319e4a" id=4355445c-af4c-440c-8526-1440bb0547f0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:34:22 addons-800763 crio[828]: time="2025-11-15T10:34:22.541344529Z" level=info msg="Started container" PID=4985 containerID=b1cb9e8cb1d134878da767afb5fd02fd13174f809faf97194fa4cc8d90319e4a description=default/busybox/busybox id=4355445c-af4c-440c-8526-1440bb0547f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0661af5accbd067f59ba7cdb547881040f90a95a45b5cf655cf71987cc8d3382
	Nov 15 10:34:31 addons-800763 crio[828]: time="2025-11-15T10:34:31.779471992Z" level=info msg="Removing container: 8e56e2e5b14ffdebc3987bbf3fec0b7c4633192c0c395ec5f04ca5eb20de86b5" id=508b38a5-f632-4c97-bdc9-81d32ad49d09 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:34:31 addons-800763 crio[828]: time="2025-11-15T10:34:31.782034952Z" level=info msg="Error loading conmon cgroup of container 8e56e2e5b14ffdebc3987bbf3fec0b7c4633192c0c395ec5f04ca5eb20de86b5: cgroup deleted" id=508b38a5-f632-4c97-bdc9-81d32ad49d09 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:34:31 addons-800763 crio[828]: time="2025-11-15T10:34:31.792689126Z" level=info msg="Removed container 8e56e2e5b14ffdebc3987bbf3fec0b7c4633192c0c395ec5f04ca5eb20de86b5: gcp-auth/gcp-auth-certs-create-zsd8b/create" id=508b38a5-f632-4c97-bdc9-81d32ad49d09 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:34:31 addons-800763 crio[828]: time="2025-11-15T10:34:31.795902177Z" level=info msg="Stopping pod sandbox: 5c502f9a62e924fc213fc65ff07a01fa01bc11dfc458f0cfadf5cd53ea0ca9dc" id=959b7c81-a318-4f40-9666-9b205ca42094 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 10:34:31 addons-800763 crio[828]: time="2025-11-15T10:34:31.795970527Z" level=info msg="Stopped pod sandbox (already stopped): 5c502f9a62e924fc213fc65ff07a01fa01bc11dfc458f0cfadf5cd53ea0ca9dc" id=959b7c81-a318-4f40-9666-9b205ca42094 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 10:34:31 addons-800763 crio[828]: time="2025-11-15T10:34:31.796558948Z" level=info msg="Removing pod sandbox: 5c502f9a62e924fc213fc65ff07a01fa01bc11dfc458f0cfadf5cd53ea0ca9dc" id=fcd5be8c-8ce3-4b57-8ef9-c245b358bf76 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 10:34:31 addons-800763 crio[828]: time="2025-11-15T10:34:31.802870071Z" level=info msg="Removed pod sandbox: 5c502f9a62e924fc213fc65ff07a01fa01bc11dfc458f0cfadf5cd53ea0ca9dc" id=fcd5be8c-8ce3-4b57-8ef9-c245b358bf76 name=/runtime.v1.RuntimeService/RemovePodSandbox
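
	The CRI-O block above is the container runtime's journal for this run. A sketch of how a similar view can be pulled from a live node, assuming the addons-800763 profile from this report is still running; these are standard minikube/crictl commands, not part of the test harness:

	# Assumes the addons-800763 cluster from this report is still up and reachable.
	minikube -p addons-800763 ssh -- sudo journalctl -u crio --no-pager | tail -n 50
	# List all containers and images as seen by the CRI runtime.
	minikube -p addons-800763 ssh -- sudo crictl ps -a
	minikube -p addons-800763 ssh -- sudo crictl images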
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	b1cb9e8cb1d13       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          9 seconds ago        Running             busybox                                  0                   0661af5accbd0       busybox                                    default
	a9042d386fbd2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          15 seconds ago       Running             csi-snapshotter                          0                   cb0bc7f7e436c       csi-hostpathplugin-b4dh9                   kube-system
	58a3d7ae99c21       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          16 seconds ago       Running             csi-provisioner                          0                   cb0bc7f7e436c       csi-hostpathplugin-b4dh9                   kube-system
	cf788ff5ca9ed       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            18 seconds ago       Running             liveness-probe                           0                   cb0bc7f7e436c       csi-hostpathplugin-b4dh9                   kube-system
	b92e7600078c6       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           19 seconds ago       Running             hostpath                                 0                   cb0bc7f7e436c       csi-hostpathplugin-b4dh9                   kube-system
	242745a7e2fe0       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                20 seconds ago       Running             node-driver-registrar                    0                   cb0bc7f7e436c       csi-hostpathplugin-b4dh9                   kube-system
	6a7ca91f99713       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 21 seconds ago       Running             gcp-auth                                 0                   d4ba222fb6799       gcp-auth-78565c9fb4-st9nz                  gcp-auth
	40c1b4b58e2c3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            24 seconds ago       Running             gadget                                   0                   5855f10464733       gadget-xqsc5                               gadget
	825e5f7ffbf22       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             26 seconds ago       Exited              patch                                    3                   c2055679faa3c       gcp-auth-certs-patch-lfd45                 gcp-auth
	bdf22aea1b665       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             28 seconds ago       Running             controller                               0                   684aa11646346       ingress-nginx-controller-6c8bf45fb-krqbs   ingress-nginx
	eb2c40f0693a3       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              34 seconds ago       Running             registry-proxy                           0                   5b09fb9d79f70       registry-proxy-frc5j                       kube-system
	b7438622a3867       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   38 seconds ago       Running             csi-external-health-monitor-controller   0                   cb0bc7f7e436c       csi-hostpathplugin-b4dh9                   kube-system
	a4dbbc00dd992       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     39 seconds ago       Running             nvidia-device-plugin-ctr                 0                   dc7c732bd16eb       nvidia-device-plugin-daemonset-hc67v       kube-system
	fcf398de5ed70       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             44 seconds ago       Running             csi-attacher                             0                   55ee16e3ed6d6       csi-hostpath-attacher-0                    kube-system
	63983c9057a45       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              45 seconds ago       Running             csi-resizer                              0                   848676623a853       csi-hostpath-resizer-0                     kube-system
	b45bc2cf489d6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   47 seconds ago       Exited              patch                                    0                   a2e98578048b2       ingress-nginx-admission-patch-cz8cl        ingress-nginx
	f1d40dd3d4b9b       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               47 seconds ago       Running             cloud-spanner-emulator                   0                   bfe4e2b3832ac       cloud-spanner-emulator-6f9fcf858b-rkrsf    default
	0c61d36c7a511       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      52 seconds ago       Running             volume-snapshot-controller               0                   63d904695c088       snapshot-controller-7d9fbc56b8-dtn6d       kube-system
	3ca4db6a78c91       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               52 seconds ago       Running             minikube-ingress-dns                     0                   5196234c7f03f       kube-ingress-dns-minikube                  kube-system
	0c93957738cd9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              create                                   0                   ec5b07abb4475       ingress-nginx-admission-create-9p9nh       ingress-nginx
	17bb62b0a1fd7       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   d50e67db923b6       snapshot-controller-7d9fbc56b8-s9tcg       kube-system
	d8aa5125c640c       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   1cb3d0792a284       metrics-server-85b7d694d7-prnnw            kube-system
	27c39a02e207f       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   2ebb2c6548d34       local-path-provisioner-648f6765c9-hbdxm    local-path-storage
	50086071e8f3c       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   f0ecc018e9534       yakd-dashboard-5ff678cb9-2phbk             yakd-dashboard
	a04a62ebc8233       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   b754a0cbe227d       registry-6b586f9694-snxbp                  kube-system
	4543ce964ae98       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   d87a7fe56fe87       storage-provisioner                        kube-system
	1709e904357b7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   780c2b0afd478       coredns-66bc5c9577-b4lj6                   kube-system
	60697a03970e4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             About a minute ago   Running             kindnet-cni                              0                   252750f24ec16       kindnet-blpd7                              kube-system
	2cdb176563f24       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             About a minute ago   Running             kube-proxy                               0                   50698c6e5eba8       kube-proxy-pg4bh                           kube-system
	d105de9b64fd9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   e70ef115896f2       kube-apiserver-addons-800763               kube-system
	33e51fd6419d3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   7d18e67da7671       kube-scheduler-addons-800763               kube-system
	7cfdcbc77bbdd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   08cef7e93c9dd       etcd-addons-800763                         kube-system
	7d2cf8e9b9a68       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   49581036023d5       kube-controller-manager-addons-800763      kube-system
	
	
	==> coredns [1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f] <==
	[INFO] 10.244.0.17:57801 - 37994 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00020934s
	[INFO] 10.244.0.17:57801 - 30906 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002018978s
	[INFO] 10.244.0.17:57801 - 39317 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002093038s
	[INFO] 10.244.0.17:57801 - 59765 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000437322s
	[INFO] 10.244.0.17:57801 - 43242 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000488022s
	[INFO] 10.244.0.17:44372 - 50193 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000154488s
	[INFO] 10.244.0.17:44372 - 49988 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000253238s
	[INFO] 10.244.0.17:41539 - 43263 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000118639s
	[INFO] 10.244.0.17:41539 - 42802 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000159346s
	[INFO] 10.244.0.17:54013 - 55553 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101737s
	[INFO] 10.244.0.17:54013 - 55373 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000136158s
	[INFO] 10.244.0.17:48906 - 64736 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00134126s
	[INFO] 10.244.0.17:48906 - 64916 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001493818s
	[INFO] 10.244.0.17:50827 - 54860 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000126484s
	[INFO] 10.244.0.17:50827 - 54671 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000173819s
	[INFO] 10.244.0.21:33288 - 27293 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00040675s
	[INFO] 10.244.0.21:57709 - 57858 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000259285s
	[INFO] 10.244.0.21:58593 - 58638 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147899s
	[INFO] 10.244.0.21:57486 - 23006 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125212s
	[INFO] 10.244.0.21:58345 - 21233 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000159961s
	[INFO] 10.244.0.21:56598 - 54193 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015361s
	[INFO] 10.244.0.21:47737 - 24645 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002216757s
	[INFO] 10.244.0.21:37859 - 34830 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001974752s
	[INFO] 10.244.0.21:37185 - 60747 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002103811s
	[INFO] 10.244.0.21:40060 - 15453 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00227576s
	
	
	==> describe nodes <==
	Name:               addons-800763
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-800763
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=addons-800763
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_32_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-800763
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-800763"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:32:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-800763
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:34:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:34:03 +0000   Sat, 15 Nov 2025 10:32:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:34:03 +0000   Sat, 15 Nov 2025 10:32:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:34:03 +0000   Sat, 15 Nov 2025 10:32:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:34:03 +0000   Sat, 15 Nov 2025 10:33:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-800763
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                f6721dac-01aa-47dc-9bba-4ca8229436ed
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-6f9fcf858b-rkrsf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  gadget                      gadget-xqsc5                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  gcp-auth                    gcp-auth-78565c9fb4-st9nz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-krqbs    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         109s
	  kube-system                 coredns-66bc5c9577-b4lj6                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     115s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 csi-hostpathplugin-b4dh9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 etcd-addons-800763                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-blpd7                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-addons-800763                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-addons-800763       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-pg4bh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-addons-800763                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 metrics-server-85b7d694d7-prnnw             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         110s
	  kube-system                 nvidia-device-plugin-daemonset-hc67v        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 registry-6b586f9694-snxbp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 registry-creds-764b6fb674-66shb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 registry-proxy-frc5j                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 snapshot-controller-7d9fbc56b8-dtn6d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 snapshot-controller-7d9fbc56b8-s9tcg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  local-path-storage          local-path-provisioner-648f6765c9-hbdxm     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-2phbk              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     109s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 114s  kube-proxy       
	  Normal   Starting                 2m1s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m1s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m1s  kubelet          Node addons-800763 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m1s  kubelet          Node addons-800763 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m1s  kubelet          Node addons-800763 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           116s  node-controller  Node addons-800763 event: Registered Node addons-800763 in Controller
	  Normal   NodeReady                74s   kubelet          Node addons-800763 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov15 09:26] systemd-journald[225]: Failed to send WATCHDOG=1 notification message: Connection refused
	[Nov15 09:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[  +0.057232] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f] <==
	{"level":"warn","ts":"2025-11-15T10:32:27.590214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.616982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.660905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.701304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.736900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.781919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.812902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.849721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.889124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:27.930865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.001046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.015717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.054713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.070801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.183686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.209139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.221135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.249826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:28.352939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:44.276978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:32:44.295523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:33:06.361256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:33:06.369406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:33:06.390442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:33:06.406444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56552","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [6a7ca91f997133c906da4b013fa65949a3c035d9fda4015f3181d5667f3cf1ff] <==
	2025/11/15 10:34:10 GCP Auth Webhook started!
	2025/11/15 10:34:19 Ready to marshal response ...
	2025/11/15 10:34:19 Ready to write response ...
	2025/11/15 10:34:20 Ready to marshal response ...
	2025/11/15 10:34:20 Ready to write response ...
	2025/11/15 10:34:20 Ready to marshal response ...
	2025/11/15 10:34:20 Ready to write response ...
	
	
	==> kernel <==
	 10:34:32 up  2:17,  0 user,  load average: 2.34, 2.98, 3.42
	Linux addons-800763 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108] <==
	I1115 10:32:38.401714       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:32:38.402164       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:33:08.402238       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:33:08.402241       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:33:08.402340       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:33:08.402428       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:33:09.801904       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:33:09.801980       1 metrics.go:72] Registering metrics
	I1115 10:33:09.802099       1 controller.go:711] "Syncing nftables rules"
	I1115 10:33:18.406925       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:33:18.406987       1 main.go:301] handling current node
	I1115 10:33:28.401748       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:33:28.401779       1 main.go:301] handling current node
	I1115 10:33:38.401475       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:33:38.401515       1 main.go:301] handling current node
	I1115 10:33:48.402933       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:33:48.402966       1 main.go:301] handling current node
	I1115 10:33:58.405271       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:33:58.405300       1 main.go:301] handling current node
	I1115 10:34:08.402331       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:34:08.402458       1 main.go:301] handling current node
	I1115 10:34:18.401467       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:34:18.401608       1 main.go:301] handling current node
	I1115 10:34:28.404420       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:34:28.404540       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0] <==
	I1115 10:32:43.871461       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.101.240.80"}
	I1115 10:32:43.896064       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1115 10:32:43.993666       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.102.46.156"}
	W1115 10:32:44.272776       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1115 10:32:44.291952       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1115 10:32:47.079591       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.130.229"}
	W1115 10:33:06.355111       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1115 10:33:06.369426       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1115 10:33:06.390384       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1115 10:33:06.406068       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1115 10:33:18.794605       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.130.229:443: connect: connection refused
	E1115 10:33:18.794648       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.130.229:443: connect: connection refused" logger="UnhandledError"
	W1115 10:33:18.794826       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.130.229:443: connect: connection refused
	E1115 10:33:18.794906       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.130.229:443: connect: connection refused" logger="UnhandledError"
	W1115 10:33:18.899824       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.130.229:443: connect: connection refused
	E1115 10:33:18.899867       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.130.229:443: connect: connection refused" logger="UnhandledError"
	E1115 10:33:31.225009       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.214.220:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.214.220:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.214.220:443: connect: connection refused" logger="UnhandledError"
	W1115 10:33:31.232207       1 handler_proxy.go:99] no RequestInfo found in the context
	E1115 10:33:31.235670       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1115 10:33:31.281789       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1115 10:33:31.287654       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1115 10:34:30.267754       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45434: use of closed network connection
	
	
	==> kube-controller-manager [7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15] <==
	I1115 10:32:36.343804       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 10:32:36.343834       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 10:32:36.355332       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-800763" podCIDRs=["10.244.0.0/24"]
	I1115 10:32:36.364547       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:32:36.369743       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:32:36.374289       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:32:36.375502       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:32:36.376094       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:32:36.376155       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:32:36.376681       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:32:36.376817       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:32:36.376691       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:32:36.377015       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:32:36.378847       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:32:36.379119       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:32:36.389154       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	E1115 10:32:42.251039       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1115 10:33:06.348559       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1115 10:33:06.348713       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1115 10:33:06.348782       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1115 10:33:06.376939       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1115 10:33:06.381846       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 10:33:06.449300       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:33:06.482370       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:33:21.337396       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e] <==
	I1115 10:32:38.361253       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:32:38.462528       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:32:38.564311       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:32:38.564348       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 10:32:38.564439       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:32:38.655991       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:32:38.656056       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:32:38.678157       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:32:38.678495       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:32:38.678518       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:32:38.704009       1 config.go:200] "Starting service config controller"
	I1115 10:32:38.704033       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:32:38.704153       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:32:38.704167       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:32:38.704583       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:32:38.704598       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:32:38.709026       1 config.go:309] "Starting node config controller"
	I1115 10:32:38.709056       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:32:38.804290       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:32:38.804381       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:32:38.804696       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:32:38.833145       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44] <==
	E1115 10:32:29.386525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:32:29.386560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:32:29.393529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:32:29.393681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:32:29.394279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:32:29.394289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:32:29.394337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:32:29.394382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:32:29.394457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:32:29.394484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:32:29.394508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:32:29.394532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:32:29.394563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:32:29.394578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:32:30.303647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:32:30.305090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:32:30.321728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:32:30.345550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:32:30.468423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:32:30.492021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:32:30.499143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:32:30.506454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:32:30.519096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:32:30.809020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1115 10:32:33.273243       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:33:54 addons-800763 kubelet[1288]: I1115 10:33:54.353457    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-hc67v" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 10:33:58 addons-800763 kubelet[1288]: I1115 10:33:58.404611    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-frc5j" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 10:33:59 addons-800763 kubelet[1288]: I1115 10:33:59.413214    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-frc5j" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 10:34:04 addons-800763 kubelet[1288]: I1115 10:34:04.465651    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-frc5j" podStartSLOduration=8.863759996 podStartE2EDuration="46.465626097s" podCreationTimestamp="2025-11-15 10:33:18 +0000 UTC" firstStartedPulling="2025-11-15 10:33:19.889671707 +0000 UTC m=+48.289845414" lastFinishedPulling="2025-11-15 10:33:57.491537816 +0000 UTC m=+85.891711515" observedRunningTime="2025-11-15 10:33:58.448806482 +0000 UTC m=+86.848980222" watchObservedRunningTime="2025-11-15 10:34:04.465626097 +0000 UTC m=+92.865799804"
	Nov 15 10:34:04 addons-800763 kubelet[1288]: I1115 10:34:04.723435    1288 scope.go:117] "RemoveContainer" containerID="de1989bba2411754a61b929031e5f5845bed4da9598456556e11f8695c4901ef"
	Nov 15 10:34:06 addons-800763 kubelet[1288]: I1115 10:34:06.450559    1288 scope.go:117] "RemoveContainer" containerID="de1989bba2411754a61b929031e5f5845bed4da9598456556e11f8695c4901ef"
	Nov 15 10:34:06 addons-800763 kubelet[1288]: I1115 10:34:06.470268    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-krqbs" podStartSLOduration=59.027270775 podStartE2EDuration="1m23.470246771s" podCreationTimestamp="2025-11-15 10:32:43 +0000 UTC" firstStartedPulling="2025-11-15 10:33:39.502363796 +0000 UTC m=+67.902537495" lastFinishedPulling="2025-11-15 10:34:03.945339784 +0000 UTC m=+92.345513491" observedRunningTime="2025-11-15 10:34:04.468079058 +0000 UTC m=+92.868252757" watchObservedRunningTime="2025-11-15 10:34:06.470246771 +0000 UTC m=+94.870420469"
	Nov 15 10:34:07 addons-800763 kubelet[1288]: I1115 10:34:07.693531    1288 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc4fr\" (UniqueName: \"kubernetes.io/projected/ff9f0f59-91ca-47f4-8e38-b4ac3052d2c5-kube-api-access-zc4fr\") pod \"ff9f0f59-91ca-47f4-8e38-b4ac3052d2c5\" (UID: \"ff9f0f59-91ca-47f4-8e38-b4ac3052d2c5\") "
	Nov 15 10:34:07 addons-800763 kubelet[1288]: I1115 10:34:07.695661    1288 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9f0f59-91ca-47f4-8e38-b4ac3052d2c5-kube-api-access-zc4fr" (OuterVolumeSpecName: "kube-api-access-zc4fr") pod "ff9f0f59-91ca-47f4-8e38-b4ac3052d2c5" (UID: "ff9f0f59-91ca-47f4-8e38-b4ac3052d2c5"). InnerVolumeSpecName "kube-api-access-zc4fr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 15 10:34:07 addons-800763 kubelet[1288]: I1115 10:34:07.794739    1288 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zc4fr\" (UniqueName: \"kubernetes.io/projected/ff9f0f59-91ca-47f4-8e38-b4ac3052d2c5-kube-api-access-zc4fr\") on node \"addons-800763\" DevicePath \"\""
	Nov 15 10:34:08 addons-800763 kubelet[1288]: I1115 10:34:08.465556    1288 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2055679faa3c202ba74e21f362e83b7c1fac83089e03178cada96b272edaf55"
	Nov 15 10:34:08 addons-800763 kubelet[1288]: I1115 10:34:08.493666    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-xqsc5" podStartSLOduration=66.151898902 podStartE2EDuration="1m26.493646314s" podCreationTimestamp="2025-11-15 10:32:42 +0000 UTC" firstStartedPulling="2025-11-15 10:33:47.151973146 +0000 UTC m=+75.552146861" lastFinishedPulling="2025-11-15 10:34:07.493720574 +0000 UTC m=+95.893894273" observedRunningTime="2025-11-15 10:34:08.489076077 +0000 UTC m=+96.889249792" watchObservedRunningTime="2025-11-15 10:34:08.493646314 +0000 UTC m=+96.893820013"
	Nov 15 10:34:11 addons-800763 kubelet[1288]: I1115 10:34:11.498151    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-st9nz" podStartSLOduration=66.271101747 podStartE2EDuration="1m24.498132647s" podCreationTimestamp="2025-11-15 10:32:47 +0000 UTC" firstStartedPulling="2025-11-15 10:33:52.532128263 +0000 UTC m=+80.932301961" lastFinishedPulling="2025-11-15 10:34:10.759159162 +0000 UTC m=+99.159332861" observedRunningTime="2025-11-15 10:34:11.497592923 +0000 UTC m=+99.897766622" watchObservedRunningTime="2025-11-15 10:34:11.498132647 +0000 UTC m=+99.898306346"
	Nov 15 10:34:14 addons-800763 kubelet[1288]: I1115 10:34:14.917601    1288 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 15 10:34:14 addons-800763 kubelet[1288]: I1115 10:34:14.917659    1288 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 15 10:34:18 addons-800763 kubelet[1288]: I1115 10:34:18.091565    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-b4dh9" podStartSLOduration=2.596377642 podStartE2EDuration="1m0.091543752s" podCreationTimestamp="2025-11-15 10:33:18 +0000 UTC" firstStartedPulling="2025-11-15 10:33:19.847815272 +0000 UTC m=+48.247988971" lastFinishedPulling="2025-11-15 10:34:17.342981382 +0000 UTC m=+105.743155081" observedRunningTime="2025-11-15 10:34:17.56279848 +0000 UTC m=+105.962972204" watchObservedRunningTime="2025-11-15 10:34:18.091543752 +0000 UTC m=+106.491717459"
	Nov 15 10:34:19 addons-800763 kubelet[1288]: I1115 10:34:19.725497    1288 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f14ec2a9-108c-4233-9b92-9b714d3d02f2" path="/var/lib/kubelet/pods/f14ec2a9-108c-4233-9b92-9b714d3d02f2/volumes"
	Nov 15 10:34:20 addons-800763 kubelet[1288]: I1115 10:34:20.106907    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4586036d-8a43-480f-b6fc-9fa267e5a0d7-gcp-creds\") pod \"busybox\" (UID: \"4586036d-8a43-480f-b6fc-9fa267e5a0d7\") " pod="default/busybox"
	Nov 15 10:34:20 addons-800763 kubelet[1288]: I1115 10:34:20.107001    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q67km\" (UniqueName: \"kubernetes.io/projected/4586036d-8a43-480f-b6fc-9fa267e5a0d7-kube-api-access-q67km\") pod \"busybox\" (UID: \"4586036d-8a43-480f-b6fc-9fa267e5a0d7\") " pod="default/busybox"
	Nov 15 10:34:20 addons-800763 kubelet[1288]: W1115 10:34:20.369071    1288 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b45b50a3734306028a984e6df5a25ecbc9f7a34b679e16d6e004231de7811450/crio-0661af5accbd067f59ba7cdb547881040f90a95a45b5cf655cf71987cc8d3382 WatchSource:0}: Error finding container 0661af5accbd067f59ba7cdb547881040f90a95a45b5cf655cf71987cc8d3382: Status 404 returned error can't find the container with id 0661af5accbd067f59ba7cdb547881040f90a95a45b5cf655cf71987cc8d3382
	Nov 15 10:34:22 addons-800763 kubelet[1288]: E1115 10:34:22.831687    1288 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 15 10:34:22 addons-800763 kubelet[1288]: E1115 10:34:22.831810    1288 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee5928cb-0522-4d75-86c9-719f510099ea-gcr-creds podName:ee5928cb-0522-4d75-86c9-719f510099ea nodeName:}" failed. No retries permitted until 2025-11-15 10:35:26.831791636 +0000 UTC m=+175.231965343 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/ee5928cb-0522-4d75-86c9-719f510099ea-gcr-creds") pod "registry-creds-764b6fb674-66shb" (UID: "ee5928cb-0522-4d75-86c9-719f510099ea") : secret "registry-creds-gcr" not found
	Nov 15 10:34:23 addons-800763 kubelet[1288]: I1115 10:34:23.579470    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.450117643 podStartE2EDuration="3.579451001s" podCreationTimestamp="2025-11-15 10:34:20 +0000 UTC" firstStartedPulling="2025-11-15 10:34:20.371477068 +0000 UTC m=+108.771650767" lastFinishedPulling="2025-11-15 10:34:22.500810418 +0000 UTC m=+110.900984125" observedRunningTime="2025-11-15 10:34:23.579362401 +0000 UTC m=+111.979536124" watchObservedRunningTime="2025-11-15 10:34:23.579451001 +0000 UTC m=+111.979624708"
	Nov 15 10:34:31 addons-800763 kubelet[1288]: I1115 10:34:31.777368    1288 scope.go:117] "RemoveContainer" containerID="8e56e2e5b14ffdebc3987bbf3fec0b7c4633192c0c395ec5f04ca5eb20de86b5"
	Nov 15 10:34:31 addons-800763 kubelet[1288]: E1115 10:34:31.907982    1288 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4956354d32e019ac7e69cd49068c6953c663f12f1facda2649e65e4eaf4df38e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4956354d32e019ac7e69cd49068c6953c663f12f1facda2649e65e4eaf4df38e/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-lfd45_ff9f0f59-91ca-47f4-8e38-b4ac3052d2c5/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-lfd45_ff9f0f59-91ca-47f4-8e38-b4ac3052d2c5/patch/1.log: no such file or directory
	
	
	==> storage-provisioner [4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d] <==
	W1115 10:34:08.524184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:10.528243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:10.536396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:12.542342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:12.549330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:14.552473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:14.560509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:16.585641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:16.635260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:18.638480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:18.643295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:20.648005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:20.653157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:22.656332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:22.663781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:24.666577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:24.671184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:26.674713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:26.679332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:28.682897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:28.690038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:30.696322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:30.701926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:32.705059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:32.709199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-800763 -n addons-800763
helpers_test.go:269: (dbg) Run:  kubectl --context addons-800763 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-patch-lfd45 ingress-nginx-admission-create-9p9nh ingress-nginx-admission-patch-cz8cl registry-creds-764b6fb674-66shb
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-800763 describe pod gcp-auth-certs-patch-lfd45 ingress-nginx-admission-create-9p9nh ingress-nginx-admission-patch-cz8cl registry-creds-764b6fb674-66shb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-800763 describe pod gcp-auth-certs-patch-lfd45 ingress-nginx-admission-create-9p9nh ingress-nginx-admission-patch-cz8cl registry-creds-764b6fb674-66shb: exit status 1 (102.526758ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-lfd45" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-9p9nh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cz8cl" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-66shb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-800763 describe pod gcp-auth-certs-patch-lfd45 ingress-nginx-admission-create-9p9nh ingress-nginx-admission-patch-cz8cl registry-creds-764b6fb674-66shb: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 addons disable headlamp --alsologtostderr -v=1: exit status 11 (284.191481ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:34:33.987460  593895 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:34:33.988195  593895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:33.988215  593895 out.go:374] Setting ErrFile to fd 2...
	I1115 10:34:33.988220  593895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:34:33.988555  593895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:34:33.988939  593895 mustload.go:66] Loading cluster: addons-800763
	I1115 10:34:33.989557  593895 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:33.989600  593895 addons.go:607] checking whether the cluster is paused
	I1115 10:34:33.989721  593895 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:33.989737  593895 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:34:33.990246  593895 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:34:34.013698  593895 ssh_runner.go:195] Run: systemctl --version
	I1115 10:34:34.013766  593895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:34:34.033978  593895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:34:34.140026  593895 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:34:34.140115  593895 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:34:34.186428  593895 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:34:34.186447  593895 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:34:34.186452  593895 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:34:34.186457  593895 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:34:34.186460  593895 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:34:34.186464  593895 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:34:34.186467  593895 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:34:34.186470  593895 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:34:34.186473  593895 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:34:34.186479  593895 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:34:34.186482  593895 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:34:34.186486  593895 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:34:34.186488  593895 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:34:34.186491  593895 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:34:34.186495  593895 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:34:34.186502  593895 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:34:34.186506  593895 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:34:34.186510  593895 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:34:34.186513  593895 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:34:34.186517  593895 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:34:34.186521  593895 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:34:34.186524  593895 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:34:34.186527  593895 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:34:34.186530  593895 cri.go:89] found id: ""
	I1115 10:34:34.186581  593895 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:34:34.203076  593895 out.go:203] 
	W1115 10:34:34.206111  593895 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:34:34.206141  593895 out.go:285] * 
	* 
	W1115 10:34:34.211786  593895 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:34:34.214790  593895 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-800763 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.29s)
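Note: every addons disable failure in this run follows the same pattern visible in the stderr above. Before disabling anything, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` inside the node; that second command exits 1 because /run/runc does not exist on this crio node, so the CLI aborts with MK_ADDON_DISABLE_PAUSED. The two commands recorded in the log can be replayed by hand over minikube ssh; a minimal reproduction sketch, assuming the addons-800763 profile from this run is still up:

	# list kube-system containers the same way the pause check does
	minikube -p addons-800763 ssh -- 'sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"'
	# the step that fails above: runc tries to open its default root /run/runc, which is missing here
	minikube -p addons-800763 ssh -- 'sudo runc list -f json'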

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-rkrsf" [b0dc2d4a-9619-4ea9-b5c9-5044a31d90dd] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003123847s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (265.932551ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:35:38.721622  595821 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:38.722301  595821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:38.722312  595821 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:38.722317  595821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:38.723401  595821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:35:38.723723  595821 mustload.go:66] Loading cluster: addons-800763
	I1115 10:35:38.724096  595821 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:38.724113  595821 addons.go:607] checking whether the cluster is paused
	I1115 10:35:38.724219  595821 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:38.724234  595821 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:35:38.724681  595821 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:35:38.741843  595821 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:38.741897  595821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:35:38.763668  595821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:35:38.867797  595821 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:38.867902  595821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:38.903582  595821 cri.go:89] found id: "d3543699e4bc72bf71f683547c69e296d032115f6d5c65d292d694f0fd4a0ca0"
	I1115 10:35:38.903612  595821 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:35:38.903616  595821 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:35:38.903621  595821 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:35:38.903624  595821 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:35:38.903627  595821 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:35:38.903631  595821 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:35:38.903641  595821 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:35:38.903645  595821 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:35:38.903651  595821 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:35:38.903659  595821 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:35:38.903662  595821 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:35:38.903666  595821 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:35:38.903669  595821 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:35:38.903672  595821 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:35:38.903681  595821 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:35:38.903689  595821 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:35:38.903694  595821 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:35:38.903698  595821 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:35:38.903701  595821 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:35:38.903706  595821 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:35:38.903716  595821 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:35:38.903723  595821 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:35:38.903726  595821 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:35:38.903729  595821 cri.go:89] found id: ""
	I1115 10:35:38.903792  595821 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:35:38.919995  595821 out.go:203] 
	W1115 10:35:38.924066  595821 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:35:38.924093  595821 out.go:285] * 
	* 
	W1115 10:35:38.929819  595821 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:35:38.932742  595821 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-800763 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.28s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.5s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-800763 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-800763 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-800763 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [f3207b0c-a9ff-4d41-9cac-8052ff3d8d05] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [f3207b0c-a9ff-4d41-9cac-8052ff3d8d05] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [f3207b0c-a9ff-4d41-9cac-8052ff3d8d05] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00318806s
addons_test.go:967: (dbg) Run:  kubectl --context addons-800763 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 ssh "cat /opt/local-path-provisioner/pvc-a60dc574-d334-43fd-b1ee-4958d621bb8e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-800763 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-800763 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (302.12855ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:35:32.415650  595630 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:32.416530  595630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:32.416591  595630 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:32.416612  595630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:32.417056  595630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:35:32.417460  595630 mustload.go:66] Loading cluster: addons-800763
	I1115 10:35:32.417969  595630 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:32.418014  595630 addons.go:607] checking whether the cluster is paused
	I1115 10:35:32.418180  595630 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:32.418215  595630 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:35:32.418720  595630 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:35:32.441035  595630 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:32.441097  595630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:35:32.462328  595630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:35:32.571955  595630 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:32.572071  595630 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:32.605415  595630 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:35:32.605440  595630 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:35:32.605446  595630 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:35:32.605449  595630 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:35:32.605453  595630 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:35:32.605457  595630 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:35:32.605482  595630 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:35:32.605490  595630 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:35:32.605495  595630 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:35:32.605509  595630 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:35:32.605519  595630 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:35:32.605522  595630 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:35:32.605526  595630 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:35:32.605529  595630 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:35:32.605533  595630 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:35:32.605561  595630 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:35:32.605569  595630 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:35:32.605577  595630 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:35:32.605581  595630 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:35:32.605584  595630 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:35:32.605588  595630 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:35:32.605592  595630 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:35:32.605595  595630 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:35:32.605598  595630 cri.go:89] found id: ""
	I1115 10:35:32.605673  595630 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:35:32.639578  595630 out.go:203] 
	W1115 10:35:32.642598  595630 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:35:32.642689  595630 out.go:285] * 
	* 
	W1115 10:35:32.648527  595630 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:35:32.655501  595630 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-800763 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.50s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-hc67v" [b43ea056-fc7f-4902-9fbc-05323afb5fcb] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003443607s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (269.268537ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:35:18.672180  595260 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:18.673210  595260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:18.673229  595260 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:18.673235  595260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:18.673629  595260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:35:18.674016  595260 mustload.go:66] Loading cluster: addons-800763
	I1115 10:35:18.674689  595260 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:18.674707  595260 addons.go:607] checking whether the cluster is paused
	I1115 10:35:18.674847  595260 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:18.674865  595260 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:35:18.675549  595260 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:35:18.695343  595260 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:18.695420  595260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:35:18.713769  595260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:35:18.819798  595260 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:18.819894  595260 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:18.850516  595260 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:35:18.850539  595260 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:35:18.850548  595260 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:35:18.850552  595260 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:35:18.850555  595260 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:35:18.850558  595260 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:35:18.850561  595260 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:35:18.850565  595260 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:35:18.850568  595260 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:35:18.850575  595260 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:35:18.850578  595260 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:35:18.850581  595260 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:35:18.850584  595260 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:35:18.850588  595260 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:35:18.850592  595260 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:35:18.850596  595260 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:35:18.850601  595260 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:35:18.850605  595260 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:35:18.850608  595260 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:35:18.850611  595260 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:35:18.850616  595260 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:35:18.850622  595260 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:35:18.850626  595260 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:35:18.850629  595260 cri.go:89] found id: ""
	I1115 10:35:18.850690  595260 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:35:18.867754  595260 out.go:203] 
	W1115 10:35:18.870666  595260 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:35:18.870691  595260 out.go:285] * 
	* 
	W1115 10:35:18.876634  595260 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:35:18.880042  595260 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-800763 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-2phbk" [8c5381d4-f93d-4ad8-8c0e-101bf47ae80f] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003879874s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-800763 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-800763 addons disable yakd --alsologtostderr -v=1: exit status 11 (268.886328ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:35:23.944187  595321 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:23.945000  595321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:23.945044  595321 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:23.945065  595321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:23.945471  595321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:35:23.945832  595321 mustload.go:66] Loading cluster: addons-800763
	I1115 10:35:23.946248  595321 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:23.946293  595321 addons.go:607] checking whether the cluster is paused
	I1115 10:35:23.946427  595321 config.go:182] Loaded profile config "addons-800763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:23.946469  595321 host.go:66] Checking if "addons-800763" exists ...
	I1115 10:35:23.946939  595321 cli_runner.go:164] Run: docker container inspect addons-800763 --format={{.State.Status}}
	I1115 10:35:23.965166  595321 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:23.965226  595321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-800763
	I1115 10:35:23.984347  595321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/addons-800763/id_rsa Username:docker}
	I1115 10:35:24.096021  595321 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:24.096123  595321 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:24.126327  595321 cri.go:89] found id: "a9042d386fbd23428bfef77a08422fbd12ee1188707972dd7603752f84c6a1f4"
	I1115 10:35:24.126355  595321 cri.go:89] found id: "58a3d7ae99c21fa9e10671714629d102aedcbb557884cd0461bb573d825aad7c"
	I1115 10:35:24.126361  595321 cri.go:89] found id: "cf788ff5ca9edfa48d313bbb44fd65a24d6d4647443c5138e3c58c9dc93347fa"
	I1115 10:35:24.126365  595321 cri.go:89] found id: "b92e7600078c6c2338793b2408ae5e41e5cd9c22b9e4688c0b6f9a595310f5ad"
	I1115 10:35:24.126369  595321 cri.go:89] found id: "242745a7e2fe04d2dd488037d50d4b9beca0ced671ec77dd9da25bb5368dcdf5"
	I1115 10:35:24.126373  595321 cri.go:89] found id: "eb2c40f0693a3b47abfc7c86cdd5ae0adda52063197dca7eada979239b4a4b43"
	I1115 10:35:24.126395  595321 cri.go:89] found id: "b7438622a38674b689ad7538adbf9012441413e3ce1beaaea28a74d5ec51ac2a"
	I1115 10:35:24.126405  595321 cri.go:89] found id: "a4dbbc00dd9926bf6218cb8057cfdfb5891a10abb6e5276c3dd8574cd7663cfd"
	I1115 10:35:24.126409  595321 cri.go:89] found id: "fcf398de5ed70973c145067589592e334dce4fb1278e0b1f804c3ec6e393f237"
	I1115 10:35:24.126460  595321 cri.go:89] found id: "63983c9057a45bb49ea47c5ef7b7db625c5b07165328ba14286ff1e6bfcb9484"
	I1115 10:35:24.126472  595321 cri.go:89] found id: "0c61d36c7a5113dcbd83c8d456e7be04379d41664ebf277a7e8130b86842914f"
	I1115 10:35:24.126476  595321 cri.go:89] found id: "3ca4db6a78c91ebe2e07275de7f10b2fb52e75c56b0fd7f08da6f5c41cc5da4b"
	I1115 10:35:24.126479  595321 cri.go:89] found id: "17bb62b0a1fd74988f17de70809d3d31ce983f6b82bfec50464b0d552cbd1583"
	I1115 10:35:24.126482  595321 cri.go:89] found id: "d8aa5125c640c3d2c94e38a61b1e9aaddbeca4c77ba22be0be1cbd1fad74ed4e"
	I1115 10:35:24.126486  595321 cri.go:89] found id: "a04a62ebc8233f10a6b39b120dc07d61ca227712f5f6b69a06e6aab515baca0a"
	I1115 10:35:24.126495  595321 cri.go:89] found id: "4543ce964ae98777d6aa2119ecfeae1edba6a8e3c4666457f76aa2e82ff73d4d"
	I1115 10:35:24.126507  595321 cri.go:89] found id: "1709e904357b79b10eb15644daa8837abd44da7a9d8546f4ffeb551b88d9e35f"
	I1115 10:35:24.126512  595321 cri.go:89] found id: "60697a03970e441c14e92f523b4c02bbcac776a75085cb34c5a70bbd0c5a3108"
	I1115 10:35:24.126515  595321 cri.go:89] found id: "2cdb176563f24a57a9c913c8e94101f128f8c08ec1d9d879d43fc49dae3e763e"
	I1115 10:35:24.126518  595321 cri.go:89] found id: "d105de9b64fd925b801e9ea357655892655ee2b81acd50f97de316e1a927b0e0"
	I1115 10:35:24.126533  595321 cri.go:89] found id: "33e51fd6419d37e26c833866ba5374b3cc028c3be8e36fe10c9fc6fab804ab44"
	I1115 10:35:24.126538  595321 cri.go:89] found id: "7cfdcbc77bbddf6fd0fe03f04579d583f3aba507c11391503be4f413e6651a5f"
	I1115 10:35:24.126541  595321 cri.go:89] found id: "7d2cf8e9b9a682cf1e896fc7df1479c696a2ba09e1c3dda07ac90290301aff15"
	I1115 10:35:24.126544  595321 cri.go:89] found id: ""
	I1115 10:35:24.126613  595321 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:35:24.142290  595321 out.go:203] 
	W1115 10:35:24.145245  595321 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:35:24.145271  595321 out.go:285] * 
	* 
	W1115 10:35:24.151049  595321 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:35:24.154144  595321 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-800763 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.27s)
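The MK_ADDON_DISABLE_PAUSED failure above boils down to `sudo runc list -f json` aborting because /run/runc does not exist on the node. On CRI-O nodes the low-level runtime may be crun (whose state lives under /run/crun), or the directory may simply not exist until a container has been created, so treating a missing state directory as a hard error turns an empty list into a test failure. The Go sketch below shows one tolerant way to probe for containers under those assumptions; the helper name, the runc/crun fallback order, and the "empty list on absence" behaviour are illustrative choices, not minikube's actual implementation.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // listRuntimeContainers is an illustrative helper (not minikube's real code):
    // it tries `runc list` and then `crun list`, but first checks whether the
    // runtime's state directory exists, so a missing /run/runc is treated as
    // "no containers" rather than a fatal error.
    func listRuntimeContainers() ([]byte, error) {
        candidates := []struct{ bin, stateDir string }{
            {"runc", "/run/runc"},
            {"crun", "/run/crun"},
        }
        for _, c := range candidates {
            if _, err := os.Stat(c.stateDir); os.IsNotExist(err) {
                continue // state dir absent: this runtime has created no containers yet
            }
            out, err := exec.Command("sudo", c.bin, "list", "-f", "json").CombinedOutput()
            if err == nil {
                return out, nil
            }
        }
        return []byte("[]"), nil // nothing usable found: report an empty list instead of failing
    }

    func main() {
        out, _ := listRuntimeContainers()
        fmt.Println(string(out))
    }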

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-385299 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-385299 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-njwzv" [7a2561d5-06a6-4228-a9dc-bc9cdf6c8fce] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-385299 -n functional-385299
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-15 10:51:31.854041961 +0000 UTC m=+1208.212267770
functional_test.go:1645: (dbg) Run:  kubectl --context functional-385299 describe po hello-node-connect-7d85dfc575-njwzv -n default
functional_test.go:1645: (dbg) kubectl --context functional-385299 describe po hello-node-connect-7d85dfc575-njwzv -n default:
Name:             hello-node-connect-7d85dfc575-njwzv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-385299/192.168.49.2
Start Time:       Sat, 15 Nov 2025 10:41:31 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ghfsv (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-ghfsv:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-njwzv to functional-385299
Normal   Pulling    7m8s (x5 over 9m58s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 9m57s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 9m57s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m45s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m45s (x21 over 9m57s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-385299 logs hello-node-connect-7d85dfc575-njwzv -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-385299 logs hello-node-connect-7d85dfc575-njwzv -n default: exit status 1 (115.922ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-njwzv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-385299 logs hello-node-connect-7d85dfc575-njwzv -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-385299 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-njwzv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-385299/192.168.49.2
Start Time:       Sat, 15 Nov 2025 10:41:31 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ghfsv (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-ghfsv:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-njwzv to functional-385299
Normal   Pulling    7m9s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m9s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m9s (x5 over 9m58s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m46s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m46s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-385299 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-385299 logs -l app=hello-node-connect: exit status 1 (88.807406ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-njwzv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-385299 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-385299 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.210.96
IPs:                      10.109.210.96
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30906/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
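The repeated "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" events above mean CRI-O refuses to guess a registry for the unqualified reference, so the pod never gets its image and the service keeps empty endpoints. One way a test or manifest can avoid this class of failure is to always hand kubectl a fully qualified reference. The Go sketch below qualifies short names before use; assuming docker.io as the registry for unqualified names is an illustrative choice, and the helper is not part of the actual test suite.

    package main

    import (
        "fmt"
        "strings"
    )

    // qualifyImage prefixes a registry onto short references such as
    // "kicbase/echo-server" so CRI-O's enforcing short-name mode never has to
    // resolve them. Assumes docker.io for unqualified names (an illustrative default).
    func qualifyImage(ref string) string {
        if i := strings.IndexByte(ref, '/'); i >= 0 {
            // A leading registry host contains a dot or a colon, or is "localhost".
            host := ref[:i]
            if strings.ContainsAny(host, ".:") || host == "localhost" {
                return ref // already fully qualified
            }
        }
        if !strings.Contains(ref, ":") && !strings.Contains(ref, "@") {
            ref += ":latest" // pin a tag explicitly as well
        }
        return "docker.io/" + ref
    }

    func main() {
        fmt.Println(qualifyImage("kicbase/echo-server")) // docker.io/kicbase/echo-server:latest
    }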
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-385299
helpers_test.go:243: (dbg) docker inspect functional-385299:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3853721bb4c66f1e59df59bb8c4ab019f4b45fdd3ad158ad75ebb7eb7789604f",
	        "Created": "2025-11-15T10:38:40.147818667Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 602214,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:38:40.223376895Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/3853721bb4c66f1e59df59bb8c4ab019f4b45fdd3ad158ad75ebb7eb7789604f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3853721bb4c66f1e59df59bb8c4ab019f4b45fdd3ad158ad75ebb7eb7789604f/hostname",
	        "HostsPath": "/var/lib/docker/containers/3853721bb4c66f1e59df59bb8c4ab019f4b45fdd3ad158ad75ebb7eb7789604f/hosts",
	        "LogPath": "/var/lib/docker/containers/3853721bb4c66f1e59df59bb8c4ab019f4b45fdd3ad158ad75ebb7eb7789604f/3853721bb4c66f1e59df59bb8c4ab019f4b45fdd3ad158ad75ebb7eb7789604f-json.log",
	        "Name": "/functional-385299",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-385299:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-385299",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3853721bb4c66f1e59df59bb8c4ab019f4b45fdd3ad158ad75ebb7eb7789604f",
	                "LowerDir": "/var/lib/docker/overlay2/7142dd4ec06d621ed3d500d80fb4a970cc46c441922615fb7c8f08d74b689a24-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7142dd4ec06d621ed3d500d80fb4a970cc46c441922615fb7c8f08d74b689a24/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7142dd4ec06d621ed3d500d80fb4a970cc46c441922615fb7c8f08d74b689a24/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7142dd4ec06d621ed3d500d80fb4a970cc46c441922615fb7c8f08d74b689a24/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-385299",
	                "Source": "/var/lib/docker/volumes/functional-385299/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-385299",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-385299",
	                "name.minikube.sigs.k8s.io": "functional-385299",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d0325054675a119f74fa479a6cafe53d7c67755e70051c98e3e6a452d798b2e",
	            "SandboxKey": "/var/run/docker/netns/5d0325054675",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-385299": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9a:31:0d:3e:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0337819053ba4b70b234036d608464a57adffb457f77a99fe4805874bd906d51",
	                    "EndpointID": "e16d4a8784509dbe7d05d1e31f77a21c46913bfcc5f5ebdc891198a891481af6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-385299",
	                        "3853721bb4c6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
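Later in this post-mortem the harness repeatedly runs docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" to pull the forwarded SSH port (33519 for 22/tcp in the JSON above) out of exactly this structure. The self-contained Go sketch below applies the same template expression to a trimmed-down copy of NetworkSettings.Ports; the struct shapes are simplified assumptions, not Docker's full API types.

    package main

    import (
        "os"
        "text/template"
    )

    // Minimal stand-ins for the slice of `docker inspect` output the template
    // touches; the real Docker types carry many more fields.
    type portBinding struct {
        HostIp   string
        HostPort string
    }

    type container struct {
        NetworkSettings struct {
            Ports map[string][]portBinding
        }
    }

    func main() {
        var c container
        c.NetworkSettings.Ports = map[string][]portBinding{
            "22/tcp": {{HostIp: "127.0.0.1", HostPort: "33519"}},
        }
        // The same template shape the cli_runner calls use in the logs below.
        tmpl := template.Must(template.New("port").Parse(
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
        _ = tmpl.Execute(os.Stdout, c) // prints: 33519
    }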
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-385299 -n functional-385299
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-385299 logs -n 25: (1.468092999s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-385299 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:40 UTC │ 15 Nov 25 10:40 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 15 Nov 25 10:40 UTC │ 15 Nov 25 10:40 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 15 Nov 25 10:40 UTC │ 15 Nov 25 10:40 UTC │
	│ kubectl │ functional-385299 kubectl -- --context functional-385299 get pods                                                          │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:40 UTC │ 15 Nov 25 10:40 UTC │
	│ start   │ -p functional-385299 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:40 UTC │ 15 Nov 25 10:41 UTC │
	│ service │ invalid-svc -p functional-385299                                                                                           │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │                     │
	│ config  │ functional-385299 config unset cpus                                                                                        │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │ 15 Nov 25 10:41 UTC │
	│ cp      │ functional-385299 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │ 15 Nov 25 10:41 UTC │
	│ config  │ functional-385299 config get cpus                                                                                          │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │                     │
	│ config  │ functional-385299 config set cpus 2                                                                                        │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │ 15 Nov 25 10:41 UTC │
	│ config  │ functional-385299 config get cpus                                                                                          │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │ 15 Nov 25 10:41 UTC │
	│ config  │ functional-385299 config unset cpus                                                                                        │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │ 15 Nov 25 10:41 UTC │
	│ ssh     │ functional-385299 ssh -n functional-385299 sudo cat /home/docker/cp-test.txt                                               │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │ 15 Nov 25 10:41 UTC │
	│ config  │ functional-385299 config get cpus                                                                                          │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │                     │
	│ ssh     │ functional-385299 ssh echo hello                                                                                           │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │ 15 Nov 25 10:41 UTC │
	│ cp      │ functional-385299 cp functional-385299:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1483635294/001/cp-test.txt │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │ 15 Nov 25 10:41 UTC │
	│ ssh     │ functional-385299 ssh cat /etc/hostname                                                                                    │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │ 15 Nov 25 10:41 UTC │
	│ ssh     │ functional-385299 ssh -n functional-385299 sudo cat /home/docker/cp-test.txt                                               │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │ 15 Nov 25 10:41 UTC │
	│ tunnel  │ functional-385299 tunnel --alsologtostderr                                                                                 │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │                     │
	│ tunnel  │ functional-385299 tunnel --alsologtostderr                                                                                 │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │                     │
	│ cp      │ functional-385299 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │ 15 Nov 25 10:41 UTC │
	│ tunnel  │ functional-385299 tunnel --alsologtostderr                                                                                 │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │                     │
	│ ssh     │ functional-385299 ssh -n functional-385299 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │ 15 Nov 25 10:41 UTC │
	│ addons  │ functional-385299 addons list                                                                                              │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │ 15 Nov 25 10:41 UTC │
	│ addons  │ functional-385299 addons list -o json                                                                                      │ functional-385299 │ jenkins │ v1.37.0 │ 15 Nov 25 10:41 UTC │ 15 Nov 25 10:41 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:40:32
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:40:32.441644  606389 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:40:32.441982  606389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:40:32.441987  606389 out.go:374] Setting ErrFile to fd 2...
	I1115 10:40:32.441991  606389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:40:32.442248  606389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:40:32.442668  606389 out.go:368] Setting JSON to false
	I1115 10:40:32.443683  606389 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8583,"bootTime":1763194649,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 10:40:32.443747  606389 start.go:143] virtualization:  
	I1115 10:40:32.446990  606389 out.go:179] * [functional-385299] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:40:32.450838  606389 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:40:32.450904  606389 notify.go:221] Checking for updates...
	I1115 10:40:32.457869  606389 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:40:32.461679  606389 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 10:40:32.464594  606389 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 10:40:32.467375  606389 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:40:32.470230  606389 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:40:32.473639  606389 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:40:32.473739  606389 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:40:32.505002  606389 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:40:32.505100  606389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:40:32.566302  606389 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-15 10:40:32.55662415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:40:32.566397  606389 docker.go:319] overlay module found
	I1115 10:40:32.569575  606389 out.go:179] * Using the docker driver based on existing profile
	I1115 10:40:32.572372  606389 start.go:309] selected driver: docker
	I1115 10:40:32.572382  606389 start.go:930] validating driver "docker" against &{Name:functional-385299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-385299 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:40:32.572489  606389 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:40:32.572590  606389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:40:32.630010  606389 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-15 10:40:32.620684286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:40:32.630447  606389 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:40:32.630477  606389 cni.go:84] Creating CNI manager for ""
	I1115 10:40:32.630538  606389 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:40:32.630585  606389 start.go:353] cluster config:
	{Name:functional-385299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-385299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:40:32.635555  606389 out.go:179] * Starting "functional-385299" primary control-plane node in "functional-385299" cluster
	I1115 10:40:32.638365  606389 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:40:32.641370  606389 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:40:32.644242  606389 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:40:32.644284  606389 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:40:32.644318  606389 cache.go:65] Caching tarball of preloaded images
	I1115 10:40:32.644346  606389 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:40:32.644407  606389 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:40:32.644416  606389 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:40:32.644526  606389 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/config.json ...
	I1115 10:40:32.665463  606389 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:40:32.665475  606389 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:40:32.665493  606389 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:40:32.665515  606389 start.go:360] acquireMachinesLock for functional-385299: {Name:mk3271c34266b32b17434518c85d884f8afe3946 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:40:32.665591  606389 start.go:364] duration metric: took 46.893µs to acquireMachinesLock for "functional-385299"
	I1115 10:40:32.665610  606389 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:40:32.665615  606389 fix.go:54] fixHost starting: 
	I1115 10:40:32.665892  606389 cli_runner.go:164] Run: docker container inspect functional-385299 --format={{.State.Status}}
	I1115 10:40:32.683093  606389 fix.go:112] recreateIfNeeded on functional-385299: state=Running err=<nil>
	W1115 10:40:32.683130  606389 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:40:32.686436  606389 out.go:252] * Updating the running docker "functional-385299" container ...
	I1115 10:40:32.686462  606389 machine.go:94] provisionDockerMachine start ...
	I1115 10:40:32.686581  606389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
	I1115 10:40:32.704537  606389 main.go:143] libmachine: Using SSH client type: native
	I1115 10:40:32.704930  606389 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I1115 10:40:32.704937  606389 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:40:32.860555  606389 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-385299
	
	I1115 10:40:32.860570  606389 ubuntu.go:182] provisioning hostname "functional-385299"
	I1115 10:40:32.860629  606389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
	I1115 10:40:32.878925  606389 main.go:143] libmachine: Using SSH client type: native
	I1115 10:40:32.879226  606389 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I1115 10:40:32.879235  606389 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-385299 && echo "functional-385299" | sudo tee /etc/hostname
	I1115 10:40:33.055448  606389 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-385299
	
	I1115 10:40:33.055536  606389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
	I1115 10:40:33.078443  606389 main.go:143] libmachine: Using SSH client type: native
	I1115 10:40:33.078748  606389 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I1115 10:40:33.078762  606389 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-385299' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-385299/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-385299' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:40:33.237339  606389 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:40:33.237355  606389 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 10:40:33.237373  606389 ubuntu.go:190] setting up certificates
	I1115 10:40:33.237381  606389 provision.go:84] configureAuth start
	I1115 10:40:33.237440  606389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-385299
	I1115 10:40:33.258326  606389 provision.go:143] copyHostCerts
	I1115 10:40:33.258384  606389 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 10:40:33.258400  606389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:40:33.258518  606389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 10:40:33.258636  606389 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 10:40:33.258641  606389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:40:33.258670  606389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 10:40:33.258733  606389 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 10:40:33.258736  606389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:40:33.258758  606389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 10:40:33.258817  606389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.functional-385299 san=[127.0.0.1 192.168.49.2 functional-385299 localhost minikube]
	I1115 10:40:33.450638  606389 provision.go:177] copyRemoteCerts
	I1115 10:40:33.450697  606389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:40:33.450734  606389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
	I1115 10:40:33.469442  606389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/functional-385299/id_rsa Username:docker}
	I1115 10:40:33.576777  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:40:33.594274  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:40:33.611742  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 10:40:33.628851  606389 provision.go:87] duration metric: took 391.441508ms to configureAuth
	I1115 10:40:33.628947  606389 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:40:33.629132  606389 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:40:33.629233  606389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
	I1115 10:40:33.652314  606389 main.go:143] libmachine: Using SSH client type: native
	I1115 10:40:33.652646  606389 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I1115 10:40:33.652658  606389 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:40:39.027962  606389 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:40:39.027976  606389 machine.go:97] duration metric: took 6.341507819s to provisionDockerMachine
	I1115 10:40:39.027986  606389 start.go:293] postStartSetup for "functional-385299" (driver="docker")
	I1115 10:40:39.027996  606389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:40:39.028056  606389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:40:39.028095  606389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
	I1115 10:40:39.046573  606389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/functional-385299/id_rsa Username:docker}
	I1115 10:40:39.157009  606389 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:40:39.160570  606389 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:40:39.160588  606389 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:40:39.160599  606389 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 10:40:39.160654  606389 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 10:40:39.160733  606389 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 10:40:39.160807  606389 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/test/nested/copy/586561/hosts -> hosts in /etc/test/nested/copy/586561
	I1115 10:40:39.160880  606389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/586561
	I1115 10:40:39.168783  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:40:39.187010  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/test/nested/copy/586561/hosts --> /etc/test/nested/copy/586561/hosts (40 bytes)
	I1115 10:40:39.204717  606389 start.go:296] duration metric: took 176.716843ms for postStartSetup
	I1115 10:40:39.204788  606389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:40:39.204844  606389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
	I1115 10:40:39.222393  606389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/functional-385299/id_rsa Username:docker}
	I1115 10:40:39.326436  606389 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:40:39.331576  606389 fix.go:56] duration metric: took 6.665954156s for fixHost
	I1115 10:40:39.331592  606389 start.go:83] releasing machines lock for "functional-385299", held for 6.665993148s
	I1115 10:40:39.331660  606389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-385299
	I1115 10:40:39.349518  606389 ssh_runner.go:195] Run: cat /version.json
	I1115 10:40:39.349559  606389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
	I1115 10:40:39.349564  606389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:40:39.349625  606389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
	I1115 10:40:39.367568  606389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/functional-385299/id_rsa Username:docker}
	I1115 10:40:39.373001  606389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/functional-385299/id_rsa Username:docker}
	I1115 10:40:39.555179  606389 ssh_runner.go:195] Run: systemctl --version
	I1115 10:40:39.561717  606389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:40:39.600906  606389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:40:39.605429  606389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:40:39.605496  606389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:40:39.613477  606389 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
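For reference, the find invocation above renames any bridge or podman CNI config so it cannot conflict with the kindnet CNI installed later; here it found nothing. A dry-run sketch (same predicates as the logged command, -print instead of the rename) lists what would be disabled:

    # list candidate CNI configs without renaming them
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) -print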
	I1115 10:40:39.613491  606389 start.go:496] detecting cgroup driver to use...
	I1115 10:40:39.613521  606389 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:40:39.613567  606389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:40:39.629793  606389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:40:39.643480  606389 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:40:39.643533  606389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:40:39.660017  606389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:40:39.673623  606389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:40:39.818923  606389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:40:39.964449  606389 docker.go:234] disabling docker service ...
	I1115 10:40:39.964509  606389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:40:39.980324  606389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:40:39.993985  606389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:40:40.166133  606389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:40:40.310080  606389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:40:40.323236  606389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:40:40.337261  606389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:40:40.337313  606389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:40:40.346270  606389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:40:40.346343  606389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:40:40.355102  606389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:40:40.363888  606389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:40:40.373341  606389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:40:40.382590  606389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:40:40.391727  606389 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:40:40.400761  606389 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:40:40.410835  606389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:40:40.418848  606389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
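The sed edits above set the pause image, switch CRI-O to the cgroupfs cgroup manager, pin conmon to the pod cgroup, and open unprivileged ports via default_sysctls; the last two commands check bridge-nf-call-iptables and turn on IPv4 forwarding. A sketch for inspecting the result before the restart:

    # keys touched by the sed edits in CRI-O's drop-in config
    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pod networking needs this to read 1
    cat /proc/sys/net/ipv4/ip_forward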
	I1115 10:40:40.426526  606389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:40:40.564498  606389 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:40:47.490199  606389 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.925678419s)
	I1115 10:40:47.490215  606389 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:40:47.490262  606389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:40:47.494165  606389 start.go:564] Will wait 60s for crictl version
	I1115 10:40:47.494219  606389 ssh_runner.go:195] Run: which crictl
	I1115 10:40:47.497778  606389 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:40:47.529044  606389 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
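The version query above reached CRI-O through the endpoint minikube wrote to /etc/crictl.yaml a few steps earlier (unix:///var/run/crio/crio.sock). A sketch for checking that wiring by hand:

    # crictl reads its default endpoint from /etc/crictl.yaml; these should agree
    sudo cat /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info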
	I1115 10:40:47.529120  606389 ssh_runner.go:195] Run: crio --version
	I1115 10:40:47.559079  606389 ssh_runner.go:195] Run: crio --version
	I1115 10:40:47.593638  606389 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:40:47.596606  606389 cli_runner.go:164] Run: docker network inspect functional-385299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:40:47.612691  606389 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 10:40:47.619958  606389 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1115 10:40:47.623034  606389 kubeadm.go:884] updating cluster {Name:functional-385299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-385299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:40:47.623151  606389 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:40:47.623221  606389 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:40:47.656717  606389 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:40:47.656729  606389 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:40:47.656784  606389 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:40:47.682624  606389 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:40:47.682636  606389 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:40:47.682642  606389 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1115 10:40:47.682750  606389 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-385299 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-385299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:40:47.682837  606389 ssh_runner.go:195] Run: crio config
	I1115 10:40:47.739250  606389 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1115 10:40:47.739268  606389 cni.go:84] Creating CNI manager for ""
	I1115 10:40:47.739277  606389 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:40:47.739286  606389 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:40:47.739308  606389 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-385299 NodeName:functional-385299 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:40:47.739431  606389 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-385299"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:40:47.739499  606389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:40:47.747674  606389 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:40:47.747737  606389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:40:47.755380  606389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 10:40:47.768029  606389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:40:47.780741  606389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
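The kubeadm.yaml just shipped carries this test's apiserver override (enable-admission-plugins=NamespaceAutoProvision) and declares /etc/kubernetes/manifests as staticPodPath. Once the control plane is regenerated further down, the override can be confirmed on the node with a sketch like this (the standard static pod manifest name is an assumption):

    # the flag should appear both in the static pod manifest and on the running process
    sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml
    pgrep -af kube-apiserver | tr ' ' '\n' | grep enable-admission-plugins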
	I1115 10:40:47.793622  606389 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:40:47.797179  606389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:40:47.937398  606389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:40:47.951814  606389 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299 for IP: 192.168.49.2
	I1115 10:40:47.951825  606389 certs.go:195] generating shared ca certs ...
	I1115 10:40:47.951838  606389 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:40:47.951993  606389 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 10:40:47.952030  606389 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 10:40:47.952035  606389 certs.go:257] generating profile certs ...
	I1115 10:40:47.952126  606389 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.key
	I1115 10:40:47.952170  606389 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/apiserver.key.c7b9c565
	I1115 10:40:47.952206  606389 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/proxy-client.key
	I1115 10:40:47.952314  606389 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 10:40:47.952342  606389 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 10:40:47.952348  606389 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:40:47.952374  606389 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 10:40:47.952394  606389 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:40:47.952412  606389 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 10:40:47.952461  606389 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:40:47.953103  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:40:47.971168  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 10:40:47.988651  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:40:48.006705  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:40:48.031341  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:40:48.051609  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:40:48.069537  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:40:48.087488  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:40:48.105363  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 10:40:48.123463  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 10:40:48.141484  606389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:40:48.159241  606389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:40:48.172462  606389 ssh_runner.go:195] Run: openssl version
	I1115 10:40:48.178838  606389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:40:48.187628  606389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:40:48.191347  606389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:40:48.191403  606389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:40:48.232009  606389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:40:48.239803  606389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 10:40:48.248020  606389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 10:40:48.251845  606389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 10:40:48.251901  606389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 10:40:48.293176  606389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 10:40:48.301200  606389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 10:40:48.309624  606389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 10:40:48.313351  606389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 10:40:48.313405  606389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 10:40:48.354474  606389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:40:48.363560  606389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:40:48.367740  606389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:40:48.408944  606389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:40:48.451187  606389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:40:48.492470  606389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:40:48.534111  606389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:40:48.575082  606389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
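Each of the six openssl invocations above uses -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours). A standalone sketch of the same test against one of the certs copied earlier:

    # exit 0 = still valid 24h from now, non-zero = about to expire
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h"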
	I1115 10:40:48.617161  606389 kubeadm.go:401] StartCluster: {Name:functional-385299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-385299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:40:48.617259  606389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:40:48.617341  606389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:40:48.646722  606389 cri.go:89] found id: "0fcfd68b3ba239953582173417e7e6f77211022e7a6f520bf701736bdcc60dac"
	I1115 10:40:48.646733  606389 cri.go:89] found id: "021c29353d3641886fbee2dfe62a13165755c20d5b0ed83dbe2ee0c6306360c2"
	I1115 10:40:48.646737  606389 cri.go:89] found id: "d8695fd47fae31e29eb3b6a92c174af74b33e1c71a5f3999010d9a49685f8566"
	I1115 10:40:48.646741  606389 cri.go:89] found id: "f4a1a5557f4c6ad15c0f05026148d6d644522b81cb89efed34c83cdefce3d063"
	I1115 10:40:48.646744  606389 cri.go:89] found id: "959e9250219dd90bdf89b2cd9db068ce5420d53d3efd2db67417a3d796bf8e66"
	I1115 10:40:48.646747  606389 cri.go:89] found id: "d926cb43d3437f31e9e8cab626660c7b0cbb805af9685e900e12eceeafb3c6c6"
	I1115 10:40:48.646749  606389 cri.go:89] found id: "aed3921d456659a0bc57cf7b2204ee7d3f2cccfee83ba7b25d91878603b28629"
	I1115 10:40:48.646751  606389 cri.go:89] found id: "7013fd9da8847c8882227855fc966353151c05a8b2342bb04fc720f3ae77fbb1"
	I1115 10:40:48.646753  606389 cri.go:89] found id: "ad0822d6947568c8356741dc1e4d310d4496322207c5de1485b46a7cc3d242ea"
	I1115 10:40:48.646760  606389 cri.go:89] found id: "61986f1432f3b33bcb563d2160872027b3af7ed7923c29f6826c1c9e4dc95101"
	I1115 10:40:48.646762  606389 cri.go:89] found id: "df8cf85555e14e67b28919957f80c4eea2546db8358f5e463839ee34a9c67194"
	I1115 10:40:48.646774  606389 cri.go:89] found id: "a7d978326e4799b905bc05a3dc98aebcbb8aec2aa613e2cc1788539f337568d1"
	I1115 10:40:48.646776  606389 cri.go:89] found id: "0fbf0536cbedbdb16d158902990bbd7cb6add0ffd12d1337161cc563203f5f32"
	I1115 10:40:48.646778  606389 cri.go:89] found id: "daa943d4469c405cc2895fdffc22c60e870ed32e47f8bfa72a613afb6ae78e3b"
	I1115 10:40:48.646780  606389 cri.go:89] found id: "101d0d4279ea639335fb743dbddc50d9f3b77b931407ef666bc005cb3b25d0e6"
	I1115 10:40:48.646785  606389 cri.go:89] found id: "1857f19ed1512da69c6e6a075eff2d9d3d135c1e01a32e5f665d70dc10ed6c81"
	I1115 10:40:48.646787  606389 cri.go:89] found id: ""
	I1115 10:40:48.646837  606389 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:40:48.658083  606389 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:40:48Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:40:48.658157  606389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:40:48.666159  606389 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:40:48.666168  606389 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:40:48.666221  606389 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:40:48.673631  606389 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:40:48.674153  606389 kubeconfig.go:125] found "functional-385299" server: "https://192.168.49.2:8441"
	I1115 10:40:48.675552  606389 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:40:48.683617  606389 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-15 10:38:49.693271123 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-15 10:40:47.786651619 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1115 10:40:48.683627  606389 kubeadm.go:1161] stopping kube-system containers ...
	I1115 10:40:48.683639  606389 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1115 10:40:48.683715  606389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:40:48.713249  606389 cri.go:89] found id: "0fcfd68b3ba239953582173417e7e6f77211022e7a6f520bf701736bdcc60dac"
	I1115 10:40:48.713260  606389 cri.go:89] found id: "021c29353d3641886fbee2dfe62a13165755c20d5b0ed83dbe2ee0c6306360c2"
	I1115 10:40:48.713264  606389 cri.go:89] found id: "d8695fd47fae31e29eb3b6a92c174af74b33e1c71a5f3999010d9a49685f8566"
	I1115 10:40:48.713267  606389 cri.go:89] found id: "f4a1a5557f4c6ad15c0f05026148d6d644522b81cb89efed34c83cdefce3d063"
	I1115 10:40:48.713269  606389 cri.go:89] found id: "959e9250219dd90bdf89b2cd9db068ce5420d53d3efd2db67417a3d796bf8e66"
	I1115 10:40:48.713272  606389 cri.go:89] found id: "d926cb43d3437f31e9e8cab626660c7b0cbb805af9685e900e12eceeafb3c6c6"
	I1115 10:40:48.713274  606389 cri.go:89] found id: "aed3921d456659a0bc57cf7b2204ee7d3f2cccfee83ba7b25d91878603b28629"
	I1115 10:40:48.713276  606389 cri.go:89] found id: "7013fd9da8847c8882227855fc966353151c05a8b2342bb04fc720f3ae77fbb1"
	I1115 10:40:48.713278  606389 cri.go:89] found id: "ad0822d6947568c8356741dc1e4d310d4496322207c5de1485b46a7cc3d242ea"
	I1115 10:40:48.713283  606389 cri.go:89] found id: "61986f1432f3b33bcb563d2160872027b3af7ed7923c29f6826c1c9e4dc95101"
	I1115 10:40:48.713297  606389 cri.go:89] found id: "df8cf85555e14e67b28919957f80c4eea2546db8358f5e463839ee34a9c67194"
	I1115 10:40:48.713300  606389 cri.go:89] found id: "a7d978326e4799b905bc05a3dc98aebcbb8aec2aa613e2cc1788539f337568d1"
	I1115 10:40:48.713302  606389 cri.go:89] found id: "0fbf0536cbedbdb16d158902990bbd7cb6add0ffd12d1337161cc563203f5f32"
	I1115 10:40:48.713304  606389 cri.go:89] found id: "daa943d4469c405cc2895fdffc22c60e870ed32e47f8bfa72a613afb6ae78e3b"
	I1115 10:40:48.713306  606389 cri.go:89] found id: "101d0d4279ea639335fb743dbddc50d9f3b77b931407ef666bc005cb3b25d0e6"
	I1115 10:40:48.713309  606389 cri.go:89] found id: "1857f19ed1512da69c6e6a075eff2d9d3d135c1e01a32e5f665d70dc10ed6c81"
	I1115 10:40:48.713311  606389 cri.go:89] found id: ""
	I1115 10:40:48.713316  606389 cri.go:252] Stopping containers: [0fcfd68b3ba239953582173417e7e6f77211022e7a6f520bf701736bdcc60dac 021c29353d3641886fbee2dfe62a13165755c20d5b0ed83dbe2ee0c6306360c2 d8695fd47fae31e29eb3b6a92c174af74b33e1c71a5f3999010d9a49685f8566 f4a1a5557f4c6ad15c0f05026148d6d644522b81cb89efed34c83cdefce3d063 959e9250219dd90bdf89b2cd9db068ce5420d53d3efd2db67417a3d796bf8e66 d926cb43d3437f31e9e8cab626660c7b0cbb805af9685e900e12eceeafb3c6c6 aed3921d456659a0bc57cf7b2204ee7d3f2cccfee83ba7b25d91878603b28629 7013fd9da8847c8882227855fc966353151c05a8b2342bb04fc720f3ae77fbb1 ad0822d6947568c8356741dc1e4d310d4496322207c5de1485b46a7cc3d242ea 61986f1432f3b33bcb563d2160872027b3af7ed7923c29f6826c1c9e4dc95101 df8cf85555e14e67b28919957f80c4eea2546db8358f5e463839ee34a9c67194 a7d978326e4799b905bc05a3dc98aebcbb8aec2aa613e2cc1788539f337568d1 0fbf0536cbedbdb16d158902990bbd7cb6add0ffd12d1337161cc563203f5f32 daa943d4469c405cc2895fdffc22c60e870ed32e47f8bfa72a613afb6ae78e3b 101d0d4279ea639335fb743dbddc50d9f3b77b931
407ef666bc005cb3b25d0e6 1857f19ed1512da69c6e6a075eff2d9d3d135c1e01a32e5f665d70dc10ed6c81]
	I1115 10:40:48.713392  606389 ssh_runner.go:195] Run: which crictl
	I1115 10:40:48.717205  606389 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 0fcfd68b3ba239953582173417e7e6f77211022e7a6f520bf701736bdcc60dac 021c29353d3641886fbee2dfe62a13165755c20d5b0ed83dbe2ee0c6306360c2 d8695fd47fae31e29eb3b6a92c174af74b33e1c71a5f3999010d9a49685f8566 f4a1a5557f4c6ad15c0f05026148d6d644522b81cb89efed34c83cdefce3d063 959e9250219dd90bdf89b2cd9db068ce5420d53d3efd2db67417a3d796bf8e66 d926cb43d3437f31e9e8cab626660c7b0cbb805af9685e900e12eceeafb3c6c6 aed3921d456659a0bc57cf7b2204ee7d3f2cccfee83ba7b25d91878603b28629 7013fd9da8847c8882227855fc966353151c05a8b2342bb04fc720f3ae77fbb1 ad0822d6947568c8356741dc1e4d310d4496322207c5de1485b46a7cc3d242ea 61986f1432f3b33bcb563d2160872027b3af7ed7923c29f6826c1c9e4dc95101 df8cf85555e14e67b28919957f80c4eea2546db8358f5e463839ee34a9c67194 a7d978326e4799b905bc05a3dc98aebcbb8aec2aa613e2cc1788539f337568d1 0fbf0536cbedbdb16d158902990bbd7cb6add0ffd12d1337161cc563203f5f32 daa943d4469c405cc2895fdffc22c60e870ed32e47f8bfa72a613afb6ae78e3b 101d0d
4279ea639335fb743dbddc50d9f3b77b931407ef666bc005cb3b25d0e6 1857f19ed1512da69c6e6a075eff2d9d3d135c1e01a32e5f665d70dc10ed6c81
	I1115 10:40:48.819006  606389 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1115 10:40:48.943828  606389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:40:48.951864  606389 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 15 10:38 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 15 10:38 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov 15 10:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Nov 15 10:38 /etc/kubernetes/scheduler.conf
	
	I1115 10:40:48.951953  606389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1115 10:40:48.959973  606389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1115 10:40:48.967730  606389 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:40:48.967795  606389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:40:48.975274  606389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1115 10:40:48.982911  606389 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:40:48.982966  606389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:40:48.990581  606389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1115 10:40:48.998053  606389 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:40:48.998111  606389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
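The three grep/rm pairs above apply the same rule to kubelet.conf, controller-manager.conf and scheduler.conf: any component kubeconfig that does not reference https://control-plane.minikube.internal:8441 is deleted so kubeadm can regenerate it. The pattern, consolidated into one hypothetical loop:

    # drop any component kubeconfig that does not point at the expected control-plane endpoint
    for f in kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done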
	I1115 10:40:49.005530  606389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:40:49.015013  606389 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 10:40:49.067929  606389 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 10:40:52.773489  606389 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.70553499s)
	I1115 10:40:52.773547  606389 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1115 10:40:52.998144  606389 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 10:40:53.074469  606389 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1115 10:40:53.169986  606389 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:40:53.170055  606389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:40:53.671145  606389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:40:54.170430  606389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:40:54.189295  606389 api_server.go:72] duration metric: took 1.019323369s to wait for apiserver process to appear ...
	I1115 10:40:54.189308  606389 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:40:54.189326  606389 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1115 10:40:57.452767  606389 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 10:40:57.452783  606389 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 10:40:57.452797  606389 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1115 10:40:57.490119  606389 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 10:40:57.490135  606389 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 10:40:57.689426  606389 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1115 10:40:57.700633  606389 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:40:57.700669  606389 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:40:58.190014  606389 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1115 10:40:58.201701  606389 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:40:58.201719  606389 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:40:58.689361  606389 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1115 10:40:58.698833  606389 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:40:58.698848  606389 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:40:59.190130  606389 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1115 10:40:59.201706  606389 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1115 10:40:59.216457  606389 api_server.go:141] control plane version: v1.34.1
	I1115 10:40:59.216476  606389 api_server.go:131] duration metric: took 5.027162314s to wait for apiserver health ...
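The 403 -> 500 -> 200 progression above is expected while the restarted apiserver finishes its post-start hooks: anonymous requests are rejected until the RBAC bootstrap roles exist, and /healthz keeps reporting individual hooks until they all pass. A sketch for probing the same endpoint by hand, using the node's own kubectl binary and admin kubeconfig as seen elsewhere in this log:

    # anonymous probe (403 while RBAC is still bootstrapping, then 200/ok)
    curl -sk https://192.168.49.2:8441/healthz
    # authenticated probe with the per-check breakdown shown in the log
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig /etc/kubernetes/admin.conf \
      get --raw '/healthz?verbose'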
	I1115 10:40:59.216483  606389 cni.go:84] Creating CNI manager for ""
	I1115 10:40:59.216489  606389 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:40:59.219993  606389 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:40:59.222879  606389 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:40:59.226982  606389 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:40:59.226993  606389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:40:59.242398  606389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
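The apply above installs the kindnet CNI manifest that was copied to /var/tmp/minikube/cni.yaml. Whether it rolled out can be checked with the same kubectl and kubeconfig; the app=kindnet label here is an assumption about the manifest, not something this log shows:

    # hypothetical rollout check for the kindnet DaemonSet installed above
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet -o wide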
	I1115 10:40:59.759214  606389 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:40:59.762455  606389 system_pods.go:59] 8 kube-system pods found
	I1115 10:40:59.762482  606389 system_pods.go:61] "coredns-66bc5c9577-6gcxm" [574212b5-9774-422c-bd54-eaaf565f8006] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:40:59.762492  606389 system_pods.go:61] "etcd-functional-385299" [64ea789a-6412-4488-8c81-7d25a741e0c8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:40:59.762497  606389 system_pods.go:61] "kindnet-5cljz" [d522cac3-1035-450e-adb8-b2822589daa0] Running
	I1115 10:40:59.762503  606389 system_pods.go:61] "kube-apiserver-functional-385299" [39558d8a-fda8-4f9b-a29e-7eb0930bdb4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:40:59.762510  606389 system_pods.go:61] "kube-controller-manager-functional-385299" [4fd426a6-436a-479e-aacf-57fae09d179a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:40:59.762515  606389 system_pods.go:61] "kube-proxy-hpnkb" [d54a42f0-c7c3-44c5-ad8c-d330e0e1a981] Running
	I1115 10:40:59.762521  606389 system_pods.go:61] "kube-scheduler-functional-385299" [54d8dace-b651-4b54-8637-63fd2255a99b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:40:59.762524  606389 system_pods.go:61] "storage-provisioner" [90f0bcb7-8b44-40de-bee1-b8485a3c1b64] Running
	I1115 10:40:59.762530  606389 system_pods.go:74] duration metric: took 3.294028ms to wait for pod list to return data ...
	I1115 10:40:59.762536  606389 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:40:59.766376  606389 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:40:59.766394  606389 node_conditions.go:123] node cpu capacity is 2
	I1115 10:40:59.766406  606389 node_conditions.go:105] duration metric: took 3.865366ms to run NodePressure ...
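The NodePressure check above reads the node's capacity figures and pressure conditions straight from the node object; the same data is visible via jsonpath, as a sketch:

    # node conditions (MemoryPressure/DiskPressure/PIDPressure/Ready) as type=status pairs
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get node functional-385299 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'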
	I1115 10:40:59.766464  606389 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 10:41:00.034756  606389 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1115 10:41:00.043288  606389 kubeadm.go:744] kubelet initialised
	I1115 10:41:00.043302  606389 kubeadm.go:745] duration metric: took 8.532038ms waiting for restarted kubelet to initialise ...
	I1115 10:41:00.043322  606389 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:41:00.067775  606389 ops.go:34] apiserver oom_adj: -16
	I1115 10:41:00.067789  606389 kubeadm.go:602] duration metric: took 11.401615147s to restartPrimaryControlPlane
	I1115 10:41:00.067798  606389 kubeadm.go:403] duration metric: took 11.450646841s to StartCluster
	I1115 10:41:00.067815  606389 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:41:00.067891  606389 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 10:41:00.068574  606389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:41:00.068820  606389 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:41:00.069226  606389 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:41:00.069268  606389 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:41:00.069332  606389 addons.go:70] Setting storage-provisioner=true in profile "functional-385299"
	I1115 10:41:00.069346  606389 addons.go:239] Setting addon storage-provisioner=true in "functional-385299"
	W1115 10:41:00.069351  606389 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:41:00.069374  606389 host.go:66] Checking if "functional-385299" exists ...
	I1115 10:41:00.069827  606389 cli_runner.go:164] Run: docker container inspect functional-385299 --format={{.State.Status}}
	I1115 10:41:00.073232  606389 addons.go:70] Setting default-storageclass=true in profile "functional-385299"
	I1115 10:41:00.073262  606389 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-385299"
	I1115 10:41:00.073640  606389 cli_runner.go:164] Run: docker container inspect functional-385299 --format={{.State.Status}}
	I1115 10:41:00.079098  606389 out.go:179] * Verifying Kubernetes components...
	I1115 10:41:00.088461  606389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:41:00.180239  606389 addons.go:239] Setting addon default-storageclass=true in "functional-385299"
	W1115 10:41:00.180252  606389 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:41:00.180278  606389 host.go:66] Checking if "functional-385299" exists ...
	I1115 10:41:00.180733  606389 cli_runner.go:164] Run: docker container inspect functional-385299 --format={{.State.Status}}
	I1115 10:41:00.180943  606389 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:41:00.184243  606389 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:41:00.184257  606389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:41:00.184349  606389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
	I1115 10:41:00.241848  606389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/functional-385299/id_rsa Username:docker}
	I1115 10:41:00.254427  606389 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:41:00.254443  606389 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:41:00.254522  606389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
	I1115 10:41:00.357411  606389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/functional-385299/id_rsa Username:docker}
	I1115 10:41:00.421607  606389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:41:00.531419  606389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:41:00.587413  606389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:41:01.342873  606389 node_ready.go:35] waiting up to 6m0s for node "functional-385299" to be "Ready" ...
	I1115 10:41:01.351077  606389 node_ready.go:49] node "functional-385299" is "Ready"
	I1115 10:41:01.351093  606389 node_ready.go:38] duration metric: took 8.203025ms for node "functional-385299" to be "Ready" ...
	I1115 10:41:01.351108  606389 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:41:01.351171  606389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:41:01.367891  606389 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:41:01.368406  606389 api_server.go:72] duration metric: took 1.299559233s to wait for apiserver process to appear ...
	I1115 10:41:01.368417  606389 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:41:01.368436  606389 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1115 10:41:01.371338  606389 addons.go:515] duration metric: took 1.302050856s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:41:01.385943  606389 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1115 10:41:01.387111  606389 api_server.go:141] control plane version: v1.34.1
	I1115 10:41:01.387127  606389 api_server.go:131] duration metric: took 18.704464ms to wait for apiserver health ...
	I1115 10:41:01.387135  606389 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:41:01.391012  606389 system_pods.go:59] 8 kube-system pods found
	I1115 10:41:01.391032  606389 system_pods.go:61] "coredns-66bc5c9577-6gcxm" [574212b5-9774-422c-bd54-eaaf565f8006] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:41:01.391042  606389 system_pods.go:61] "etcd-functional-385299" [64ea789a-6412-4488-8c81-7d25a741e0c8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:41:01.391046  606389 system_pods.go:61] "kindnet-5cljz" [d522cac3-1035-450e-adb8-b2822589daa0] Running
	I1115 10:41:01.391052  606389 system_pods.go:61] "kube-apiserver-functional-385299" [39558d8a-fda8-4f9b-a29e-7eb0930bdb4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:41:01.391059  606389 system_pods.go:61] "kube-controller-manager-functional-385299" [4fd426a6-436a-479e-aacf-57fae09d179a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:41:01.391063  606389 system_pods.go:61] "kube-proxy-hpnkb" [d54a42f0-c7c3-44c5-ad8c-d330e0e1a981] Running
	I1115 10:41:01.391070  606389 system_pods.go:61] "kube-scheduler-functional-385299" [54d8dace-b651-4b54-8637-63fd2255a99b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:41:01.391074  606389 system_pods.go:61] "storage-provisioner" [90f0bcb7-8b44-40de-bee1-b8485a3c1b64] Running
	I1115 10:41:01.391078  606389 system_pods.go:74] duration metric: took 3.939163ms to wait for pod list to return data ...
	I1115 10:41:01.391086  606389 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:41:01.393802  606389 default_sa.go:45] found service account: "default"
	I1115 10:41:01.393816  606389 default_sa.go:55] duration metric: took 2.725603ms for default service account to be created ...
	I1115 10:41:01.393824  606389 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:41:01.396720  606389 system_pods.go:86] 8 kube-system pods found
	I1115 10:41:01.396739  606389 system_pods.go:89] "coredns-66bc5c9577-6gcxm" [574212b5-9774-422c-bd54-eaaf565f8006] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:41:01.396749  606389 system_pods.go:89] "etcd-functional-385299" [64ea789a-6412-4488-8c81-7d25a741e0c8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:41:01.396753  606389 system_pods.go:89] "kindnet-5cljz" [d522cac3-1035-450e-adb8-b2822589daa0] Running
	I1115 10:41:01.396761  606389 system_pods.go:89] "kube-apiserver-functional-385299" [39558d8a-fda8-4f9b-a29e-7eb0930bdb4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:41:01.396767  606389 system_pods.go:89] "kube-controller-manager-functional-385299" [4fd426a6-436a-479e-aacf-57fae09d179a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:41:01.396770  606389 system_pods.go:89] "kube-proxy-hpnkb" [d54a42f0-c7c3-44c5-ad8c-d330e0e1a981] Running
	I1115 10:41:01.396775  606389 system_pods.go:89] "kube-scheduler-functional-385299" [54d8dace-b651-4b54-8637-63fd2255a99b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:41:01.396778  606389 system_pods.go:89] "storage-provisioner" [90f0bcb7-8b44-40de-bee1-b8485a3c1b64] Running
	I1115 10:41:01.396785  606389 system_pods.go:126] duration metric: took 2.955472ms to wait for k8s-apps to be running ...
	I1115 10:41:01.396792  606389 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:41:01.396847  606389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:41:01.412552  606389 system_svc.go:56] duration metric: took 15.748639ms WaitForService to wait for kubelet
	I1115 10:41:01.412570  606389 kubeadm.go:587] duration metric: took 1.343727289s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:41:01.412586  606389 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:41:01.415408  606389 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:41:01.415423  606389 node_conditions.go:123] node cpu capacity is 2
	I1115 10:41:01.415433  606389 node_conditions.go:105] duration metric: took 2.825658ms to run NodePressure ...
	I1115 10:41:01.415445  606389 start.go:242] waiting for startup goroutines ...
	I1115 10:41:01.415451  606389 start.go:247] waiting for cluster config update ...
	I1115 10:41:01.415461  606389 start.go:256] writing updated cluster config ...
	I1115 10:41:01.415771  606389 ssh_runner.go:195] Run: rm -f paused
	I1115 10:41:01.419685  606389 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:41:01.423247  606389 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6gcxm" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:41:03.428552  606389 pod_ready.go:104] pod "coredns-66bc5c9577-6gcxm" is not "Ready", error: <nil>
	W1115 10:41:05.429606  606389 pod_ready.go:104] pod "coredns-66bc5c9577-6gcxm" is not "Ready", error: <nil>
	I1115 10:41:07.929446  606389 pod_ready.go:94] pod "coredns-66bc5c9577-6gcxm" is "Ready"
	I1115 10:41:07.929460  606389 pod_ready.go:86] duration metric: took 6.506201029s for pod "coredns-66bc5c9577-6gcxm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:41:07.932156  606389 pod_ready.go:83] waiting for pod "etcd-functional-385299" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:41:07.936981  606389 pod_ready.go:94] pod "etcd-functional-385299" is "Ready"
	I1115 10:41:07.936994  606389 pod_ready.go:86] duration metric: took 4.825886ms for pod "etcd-functional-385299" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:41:07.939433  606389 pod_ready.go:83] waiting for pod "kube-apiserver-functional-385299" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:41:09.945972  606389 pod_ready.go:104] pod "kube-apiserver-functional-385299" is not "Ready", error: <nil>
	W1115 10:41:12.446474  606389 pod_ready.go:104] pod "kube-apiserver-functional-385299" is not "Ready", error: <nil>
	I1115 10:41:12.947274  606389 pod_ready.go:94] pod "kube-apiserver-functional-385299" is "Ready"
	I1115 10:41:12.947290  606389 pod_ready.go:86] duration metric: took 5.007844154s for pod "kube-apiserver-functional-385299" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:41:12.949854  606389 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-385299" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:41:12.954801  606389 pod_ready.go:94] pod "kube-controller-manager-functional-385299" is "Ready"
	I1115 10:41:12.954815  606389 pod_ready.go:86] duration metric: took 4.944632ms for pod "kube-controller-manager-functional-385299" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:41:12.957512  606389 pod_ready.go:83] waiting for pod "kube-proxy-hpnkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:41:12.962141  606389 pod_ready.go:94] pod "kube-proxy-hpnkb" is "Ready"
	I1115 10:41:12.962154  606389 pod_ready.go:86] duration metric: took 4.630544ms for pod "kube-proxy-hpnkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:41:12.965483  606389 pod_ready.go:83] waiting for pod "kube-scheduler-functional-385299" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:41:13.327630  606389 pod_ready.go:94] pod "kube-scheduler-functional-385299" is "Ready"
	I1115 10:41:13.327644  606389 pod_ready.go:86] duration metric: took 362.148934ms for pod "kube-scheduler-functional-385299" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:41:13.327655  606389 pod_ready.go:40] duration metric: took 11.907937905s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:41:13.392512  606389 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:41:13.395737  606389 out.go:179] * Done! kubectl is now configured to use "functional-385299" cluster and "default" namespace by default
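	
	The start log above ends cleanly: the control plane was restarted, both addons were enabled, and the only warning is a one-minor-version skew between kubectl 1.33.2 and the 1.34.1 cluster, which is within kubectl's supported skew window. A minimal re-check of the cluster state against the same profile (assuming the kubeconfig written by minikube is active and that the context carries the profile name, as the last log line indicates) might look like:
	
		kubectl config use-context functional-385299
		kubectl version --output=yaml   # compare clientVersion vs serverVersion
		kubectl get nodes -o wide       # node should report Ready on v1.34.1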
	
	
	==> CRI-O <==
	Nov 15 10:41:49 functional-385299 crio[3553]: time="2025-11-15T10:41:49.236825774Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-wgnn9 Namespace:default ID:73c0bee50ae0e74eb9f2822c162f4e1a6173846a1e481edd118c674727ac40ac UID:cf1c8a22-ecf6-426a-9bef-a6f720db1c54 NetNS:/var/run/netns/de9e13ab-2e02-439d-a5fc-10f5f2c4ca8e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079e58}] Aliases:map[]}"
	Nov 15 10:41:49 functional-385299 crio[3553]: time="2025-11-15T10:41:49.237207424Z" level=info msg="Checking pod default_hello-node-75c85bcc94-wgnn9 for CNI network kindnet (type=ptp)"
	Nov 15 10:41:49 functional-385299 crio[3553]: time="2025-11-15T10:41:49.241289186Z" level=info msg="Ran pod sandbox 73c0bee50ae0e74eb9f2822c162f4e1a6173846a1e481edd118c674727ac40ac with infra container: default/hello-node-75c85bcc94-wgnn9/POD" id=f7166bd1-8bfa-492a-ba59-713a40b2a4c8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:41:49 functional-385299 crio[3553]: time="2025-11-15T10:41:49.24478247Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=27a3c8ff-5341-4e45-af64-48601d0938b8 name=/runtime.v1.ImageService/PullImage
	Nov 15 10:41:53 functional-385299 crio[3553]: time="2025-11-15T10:41:53.302160363Z" level=info msg="Stopping pod sandbox: 1be5a99311e4aadfd5db916357eb85896a16e478f00a77d44aab68029f2d53b2" id=baf7c1e6-f7ed-485c-bb0f-bed883143a19 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 10:41:53 functional-385299 crio[3553]: time="2025-11-15T10:41:53.302213467Z" level=info msg="Stopped pod sandbox (already stopped): 1be5a99311e4aadfd5db916357eb85896a16e478f00a77d44aab68029f2d53b2" id=baf7c1e6-f7ed-485c-bb0f-bed883143a19 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 10:41:53 functional-385299 crio[3553]: time="2025-11-15T10:41:53.303081161Z" level=info msg="Removing pod sandbox: 1be5a99311e4aadfd5db916357eb85896a16e478f00a77d44aab68029f2d53b2" id=38abfab9-c757-45cd-8bed-4988f4a1b5e6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 10:41:53 functional-385299 crio[3553]: time="2025-11-15T10:41:53.307340378Z" level=info msg="Removed pod sandbox: 1be5a99311e4aadfd5db916357eb85896a16e478f00a77d44aab68029f2d53b2" id=38abfab9-c757-45cd-8bed-4988f4a1b5e6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 10:41:53 functional-385299 crio[3553]: time="2025-11-15T10:41:53.308056833Z" level=info msg="Stopping pod sandbox: 15cdf8680dd90b4105e6ce62a0869f1632b8612096f9415359a28d4e45dd1d9e" id=0401de3d-1705-436b-95b5-e5fd88a3ed30 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 10:41:53 functional-385299 crio[3553]: time="2025-11-15T10:41:53.308221283Z" level=info msg="Stopped pod sandbox (already stopped): 15cdf8680dd90b4105e6ce62a0869f1632b8612096f9415359a28d4e45dd1d9e" id=0401de3d-1705-436b-95b5-e5fd88a3ed30 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 10:41:53 functional-385299 crio[3553]: time="2025-11-15T10:41:53.313080949Z" level=info msg="Removing pod sandbox: 15cdf8680dd90b4105e6ce62a0869f1632b8612096f9415359a28d4e45dd1d9e" id=0122b22c-7a8d-47fc-91ee-37b087b56cb2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 10:41:53 functional-385299 crio[3553]: time="2025-11-15T10:41:53.317569921Z" level=info msg="Removed pod sandbox: 15cdf8680dd90b4105e6ce62a0869f1632b8612096f9415359a28d4e45dd1d9e" id=0122b22c-7a8d-47fc-91ee-37b087b56cb2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 10:41:53 functional-385299 crio[3553]: time="2025-11-15T10:41:53.319225891Z" level=info msg="Stopping pod sandbox: 2c14f05d95d4e1ca3e607d4ab98d54e4c7e1e17a821443e83d546d824be33818" id=c4fceba5-156f-44a0-b4fb-572bcf90c9e1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 10:41:53 functional-385299 crio[3553]: time="2025-11-15T10:41:53.319394705Z" level=info msg="Stopped pod sandbox (already stopped): 2c14f05d95d4e1ca3e607d4ab98d54e4c7e1e17a821443e83d546d824be33818" id=c4fceba5-156f-44a0-b4fb-572bcf90c9e1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 10:41:53 functional-385299 crio[3553]: time="2025-11-15T10:41:53.322541901Z" level=info msg="Removing pod sandbox: 2c14f05d95d4e1ca3e607d4ab98d54e4c7e1e17a821443e83d546d824be33818" id=20809b6f-4085-4e67-adbf-2a0edaecd8d0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 10:41:53 functional-385299 crio[3553]: time="2025-11-15T10:41:53.327491554Z" level=info msg="Removed pod sandbox: 2c14f05d95d4e1ca3e607d4ab98d54e4c7e1e17a821443e83d546d824be33818" id=20809b6f-4085-4e67-adbf-2a0edaecd8d0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 10:42:01 functional-385299 crio[3553]: time="2025-11-15T10:42:01.190070832Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=22d1d611-f55e-4cd0-8277-94807ab981a4 name=/runtime.v1.ImageService/PullImage
	Nov 15 10:42:10 functional-385299 crio[3553]: time="2025-11-15T10:42:10.18981238Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0f7aff94-5be7-4bfe-a5fd-5d8240123706 name=/runtime.v1.ImageService/PullImage
	Nov 15 10:42:23 functional-385299 crio[3553]: time="2025-11-15T10:42:23.192144486Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=05f4f758-14b1-475d-903c-4b401bf13b1c name=/runtime.v1.ImageService/PullImage
	Nov 15 10:42:51 functional-385299 crio[3553]: time="2025-11-15T10:42:51.189531656Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a3e9d57b-2dfe-42fb-9a31-66e75ce6edaf name=/runtime.v1.ImageService/PullImage
	Nov 15 10:43:11 functional-385299 crio[3553]: time="2025-11-15T10:43:11.191566085Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6ba2cee5-b9ef-408e-918d-7fdd3c722a0d name=/runtime.v1.ImageService/PullImage
	Nov 15 10:44:23 functional-385299 crio[3553]: time="2025-11-15T10:44:23.191065361Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=34e628d2-cae7-4d1c-8fa8-7b50cfae8d6d name=/runtime.v1.ImageService/PullImage
	Nov 15 10:44:43 functional-385299 crio[3553]: time="2025-11-15T10:44:43.190395989Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=95d0f66f-d2db-48d0-b9d5-e36319fe18dd name=/runtime.v1.ImageService/PullImage
	Nov 15 10:47:15 functional-385299 crio[3553]: time="2025-11-15T10:47:15.189786031Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=75c0ff99-1355-4030-a072-5c04dd09e394 name=/runtime.v1.ImageService/PullImage
	Nov 15 10:47:34 functional-385299 crio[3553]: time="2025-11-15T10:47:34.189647934Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=20a5bf4c-4d4a-4a3b-8efc-886931a2efa6 name=/runtime.v1.ImageService/PullImage
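	
	The CRI-O log is dominated by repeated "Pulling image: kicbase/echo-server:latest" requests from 10:41:49 through 10:47:34 with no matching pull-completed entry, which lines up with the hello-node pods listed in the node description below never showing a corresponding container in the container status table. A quick, hedged check for whether the pull ever finished (assuming crictl is available inside the node, as is normally the case for minikube's kicbase image, and assuming the default app=hello-node label from kubectl create deployment) could be:
	
		minikube -p functional-385299 ssh -- sudo crictl images | grep echo-server
		minikube -p functional-385299 ssh -- sudo crictl ps -a --name hello-node
		kubectl -n default describe pod -l app=hello-node   # look for image pull events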
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e3a4591cd59fc       docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33   9 minutes ago       Running             myfrontend                0                   1a6be8111d52c       sp-pod                                      default
	5ec22bbc0e90c       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   4d116c798d177       nginx-svc                                   default
	fcfce77775b61       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   1c94f08f9489c       kindnet-5cljz                               kube-system
	a223985c5d2d5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   1535d55cd31ff       storage-provisioner                         kube-system
	cdacc45fd350d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   06b6ceb7196b1       kube-proxy-hpnkb                            kube-system
	490effec6d7f4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   05a8544688fbc       coredns-66bc5c9577-6gcxm                    kube-system
	c228d1cfad4be       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   e26829fd7b99f       kube-apiserver-functional-385299            kube-system
	4766cb577b24f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   0dd5b37cf5ef2       kube-scheduler-functional-385299            kube-system
	58cb9386c098d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   94d62131c9aa7       kube-controller-manager-functional-385299   kube-system
	1f5d6b6c0ddd3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   d606cd0bd6ff5       etcd-functional-385299                      kube-system
	0fcfd68b3ba23       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       2                   1535d55cd31ff       storage-provisioner                         kube-system
	021c29353d364       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   94d62131c9aa7       kube-controller-manager-functional-385299   kube-system
	d8695fd47fae3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   0dd5b37cf5ef2       kube-scheduler-functional-385299            kube-system
	d926cb43d3437       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   1c94f08f9489c       kindnet-5cljz                               kube-system
	aed3921d45665       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   06b6ceb7196b1       kube-proxy-hpnkb                            kube-system
	7013fd9da8847       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   05a8544688fbc       coredns-66bc5c9577-6gcxm                    kube-system
	ad0822d694756       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   d606cd0bd6ff5       etcd-functional-385299                      kube-system
	
	
	==> coredns [490effec6d7f4f07176158f22ceea3579935d52a56b6f82890fce4a9bbf48184] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34753 - 39526 "HINFO IN 2865535533600822076.2390956059437909556. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.063076371s
	
	
	==> coredns [7013fd9da8847c8882227855fc966353151c05a8b2342bb04fc720f3ae77fbb1] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53115 - 2509 "HINFO IN 6399643322975272353.5196792238447154668. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013733575s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
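	
	The connection-refused errors in this coredns instance target 10.96.0.1:443 and were logged while the apiserver was being restarted; the instance then exited on SIGTERM, and its replacement (490effec6d7f4, shown above) started cleanly. A short way to confirm DNS is currently healthy without guessing pod names, using the standard k8s-app=kube-dns label that the earlier wait loop also relies on, would be:
	
		kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
		kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20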
	
	
	==> describe nodes <==
	Name:               functional-385299
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-385299
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=functional-385299
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_39_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:39:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-385299
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:51:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:50:48 +0000   Sat, 15 Nov 2025 10:39:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:50:48 +0000   Sat, 15 Nov 2025 10:39:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:50:48 +0000   Sat, 15 Nov 2025 10:39:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:50:48 +0000   Sat, 15 Nov 2025 10:39:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-385299
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                221adedf-a55d-4aa0-a9dc-95412c133181
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-wgnn9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  default                     hello-node-connect-7d85dfc575-njwzv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-6gcxm                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-385299                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-5cljz                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-385299             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-385299    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-hpnkb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-385299             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-385299 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-385299 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-385299 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-385299 event: Registered Node functional-385299 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-385299 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-385299 event: Registered Node functional-385299 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-385299 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-385299 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-385299 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-385299 event: Registered Node functional-385299 in Controller
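	
	The allocation table above is self-consistent: the listed pods request 850m CPU on a 2000m node, which truncates to the 42% shown, and the 220Mi memory request against 8022308Ki allocatable likewise truncates to 2%. To regenerate just that part of the snapshot, one option is:
	
		kubectl describe node functional-385299 | sed -n '/Allocated resources/,/Events:/p'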
	
	
	==> dmesg <==
	[Nov15 09:26] systemd-journald[225]: Failed to send WATCHDOG=1 notification message: Connection refused
	[Nov15 09:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[  +0.057232] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov15 10:38] overlayfs: idmapped layers are currently not supported
	[Nov15 10:39] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1f5d6b6c0ddd313d98911804d4b28d71e239f7bbc02d17d6add7584d3d3bbdee] <==
	{"level":"warn","ts":"2025-11-15T10:40:56.233209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.242231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.253953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.270962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.288277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.308756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.331667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.351033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.381125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.391439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.412986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.426925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.444524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.461591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.499186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.501435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.558239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.589675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.618369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.634407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.654481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:56.729677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54148","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T10:50:55.102150Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1096}
	{"level":"info","ts":"2025-11-15T10:50:55.126748Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1096,"took":"24.19881ms","hash":2488457231,"current-db-size-bytes":3174400,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1355776,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-11-15T10:50:55.126821Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2488457231,"revision":1096,"compact-revision":-1}
	
	
	==> etcd [ad0822d6947568c8356741dc1e4d310d4496322207c5de1485b46a7cc3d242ea] <==
	{"level":"warn","ts":"2025-11-15T10:40:10.505764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:10.516387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:10.541331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:10.581084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:10.595368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:10.613453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:40:10.753352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48798","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T10:40:33.822720Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-15T10:40:33.822782Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-385299","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-15T10:40:33.822900Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T10:40:33.959007Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T10:40:33.959086Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:40:33.959109Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-15T10:40:33.959221Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-15T10:40:33.959234Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-15T10:40:33.959491Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T10:40:33.959543Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T10:40:33.959552Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-15T10:40:33.959590Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T10:40:33.959599Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T10:40:33.959615Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:40:33.963130Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-15T10:40:33.963253Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:40:33.963311Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-15T10:40:33.963343Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-385299","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:51:33 up  2:34,  0 user,  load average: 0.40, 0.44, 1.47
	Linux functional-385299 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d926cb43d3437f31e9e8cab626660c7b0cbb805af9685e900e12eceeafb3c6c6] <==
	I1115 10:40:06.607359       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:40:06.610600       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1115 10:40:06.610756       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:40:06.610769       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:40:06.610783       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:40:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:40:07.102119       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:40:07.109051       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:40:07.109080       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:40:07.109241       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:40:12.011588       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:40:12.011627       1 metrics.go:72] Registering metrics
	I1115 10:40:12.011721       1 controller.go:711] "Syncing nftables rules"
	I1115 10:40:17.097387       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:40:17.097446       1 main.go:301] handling current node
	I1115 10:40:27.098508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:40:27.098540       1 main.go:301] handling current node
	
	
	==> kindnet [fcfce77775b6108c4102a39333894766128198231a38fab9ed2e7777f32ee482] <==
	I1115 10:49:28.904495       1 main.go:301] handling current node
	I1115 10:49:38.896663       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:49:38.896726       1 main.go:301] handling current node
	I1115 10:49:48.896578       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:49:48.896613       1 main.go:301] handling current node
	I1115 10:49:58.899942       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:49:58.899979       1 main.go:301] handling current node
	I1115 10:50:08.896568       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:50:08.896602       1 main.go:301] handling current node
	I1115 10:50:18.896538       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:50:18.896670       1 main.go:301] handling current node
	I1115 10:50:28.902461       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:50:28.902497       1 main.go:301] handling current node
	I1115 10:50:38.896337       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:50:38.896488       1 main.go:301] handling current node
	I1115 10:50:48.896460       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:50:48.896522       1 main.go:301] handling current node
	I1115 10:50:58.901631       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:50:58.901741       1 main.go:301] handling current node
	I1115 10:51:08.896617       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:51:08.896652       1 main.go:301] handling current node
	I1115 10:51:18.896610       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:51:18.896648       1 main.go:301] handling current node
	I1115 10:51:28.900978       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 10:51:28.901083       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c228d1cfad4be98cf069e29e91403ef15bd74d45a375728fbf7f782db7a07f60] <==
	I1115 10:40:57.637280       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:40:57.640535       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:40:57.640593       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:40:57.640612       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:40:57.640620       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:40:57.640626       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:40:57.643994       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 10:40:57.644127       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:40:57.644161       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1115 10:40:57.653859       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:40:58.137604       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:40:58.343581       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:40:59.751703       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:40:59.874984       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:40:59.946513       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:40:59.957103       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:41:00.925299       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:41:01.214682       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:41:01.267529       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:41:16.789721       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.128.50"}
	I1115 10:41:22.746300       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.106.139.247"}
	I1115 10:41:31.501749       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.210.96"}
	E1115 10:41:41.488064       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1115 10:41:48.982354       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.59.125"}
	I1115 10:50:57.562355       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [021c29353d3641886fbee2dfe62a13165755c20d5b0ed83dbe2ee0c6306360c2] <==
	I1115 10:40:15.210033       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:40:15.210098       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:40:15.213736       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:40:15.215700       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:40:15.218141       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:40:15.220463       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:40:15.221583       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:40:15.224072       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:40:15.230402       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:40:15.233667       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:40:15.239078       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 10:40:15.239261       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 10:40:15.239321       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 10:40:15.239358       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 10:40:15.239389       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 10:40:15.245401       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:40:15.249577       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 10:40:15.250993       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:40:15.250996       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:40:15.251011       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:40:15.251027       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:40:15.258234       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:40:15.260529       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:40:15.260749       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:40:15.268922       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	
	
	==> kube-controller-manager [58cb9386c098dd9e12326ed78c49dda15f9a47e8c500d48a600124e09910a91b] <==
	I1115 10:41:00.917550       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:41:00.919202       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:41:00.922034       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:41:00.924505       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:41:00.926726       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:41:00.928465       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:41:00.931421       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:41:00.932613       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:41:00.942895       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:41:00.948153       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:41:00.952661       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:41:00.954990       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:41:00.956282       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:41:00.956689       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 10:41:00.956766       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:41:00.956933       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:41:00.958220       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:41:00.958289       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 10:41:00.959574       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:41:00.962862       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:41:00.967064       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 10:41:00.967215       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:41:00.967251       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:41:00.967260       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:41:00.973057       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-proxy [aed3921d456659a0bc57cf7b2204ee7d3f2cccfee83ba7b25d91878603b28629] <==
	I1115 10:40:06.500851       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:40:08.965904       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:40:12.093221       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:40:12.094193       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 10:40:12.094326       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:40:12.329112       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:40:12.329252       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:40:12.334802       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:40:12.335158       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:40:12.335334       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:40:12.357238       1 config.go:200] "Starting service config controller"
	I1115 10:40:12.363161       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:40:12.363215       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:40:12.363221       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:40:12.363234       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:40:12.363238       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:40:12.363987       1 config.go:309] "Starting node config controller"
	I1115 10:40:12.363996       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:40:12.364002       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:40:12.471699       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:40:12.471743       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:40:12.471768       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [cdacc45fd350d357b70f07b71cf8a2b5ae1dc817603e1808f2b97460e9048481] <==
	I1115 10:40:58.651784       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:40:58.778637       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:40:58.879602       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:40:58.879729       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 10:40:58.879874       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:40:59.092529       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:40:59.092592       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:40:59.097318       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:40:59.097628       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:40:59.097650       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:40:59.099588       1 config.go:309] "Starting node config controller"
	I1115 10:40:59.099608       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:40:59.099622       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:40:59.100066       1 config.go:200] "Starting service config controller"
	I1115 10:40:59.100082       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:40:59.100098       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:40:59.100102       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:40:59.100113       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:40:59.100117       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:40:59.201090       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:40:59.201210       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:40:59.201294       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4766cb577b24fee4e9e42759fef96cad195c2692a60edd626959c6f2cdc79016] <==
	I1115 10:40:56.241662       1 serving.go:386] Generated self-signed cert in-memory
	W1115 10:40:57.461279       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:40:57.461402       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:40:57.461440       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:40:57.461490       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:40:57.588613       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:40:57.588720       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:40:57.591509       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:40:57.591786       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:40:57.591843       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:40:57.591885       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:40:57.692580       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [d8695fd47fae31e29eb3b6a92c174af74b33e1c71a5f3999010d9a49685f8566] <==
	I1115 10:40:09.900228       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:40:13.108731       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:40:13.108844       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:40:13.114277       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:40:13.114326       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:40:13.114365       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:40:13.114373       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:40:13.114388       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:40:13.114409       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:40:13.114903       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:40:13.115061       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:40:13.214934       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 10:40:13.214949       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:40:13.214991       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:40:33.825353       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1115 10:40:33.825454       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1115 10:40:33.825466       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1115 10:40:33.825486       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:40:33.825506       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:40:33.825525       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1115 10:40:33.825783       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1115 10:40:33.825812       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 15 10:49:00 functional-385299 kubelet[3862]: E1115 10:49:00.189338    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wgnn9" podUID="cf1c8a22-ecf6-426a-9bef-a6f720db1c54"
	Nov 15 10:49:07 functional-385299 kubelet[3862]: E1115 10:49:07.189638    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-njwzv" podUID="7a2561d5-06a6-4228-a9dc-bc9cdf6c8fce"
	Nov 15 10:49:14 functional-385299 kubelet[3862]: E1115 10:49:14.189129    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wgnn9" podUID="cf1c8a22-ecf6-426a-9bef-a6f720db1c54"
	Nov 15 10:49:21 functional-385299 kubelet[3862]: E1115 10:49:21.189411    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-njwzv" podUID="7a2561d5-06a6-4228-a9dc-bc9cdf6c8fce"
	Nov 15 10:49:25 functional-385299 kubelet[3862]: E1115 10:49:25.189355    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wgnn9" podUID="cf1c8a22-ecf6-426a-9bef-a6f720db1c54"
	Nov 15 10:49:32 functional-385299 kubelet[3862]: E1115 10:49:32.189424    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-njwzv" podUID="7a2561d5-06a6-4228-a9dc-bc9cdf6c8fce"
	Nov 15 10:49:37 functional-385299 kubelet[3862]: E1115 10:49:37.189191    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wgnn9" podUID="cf1c8a22-ecf6-426a-9bef-a6f720db1c54"
	Nov 15 10:49:44 functional-385299 kubelet[3862]: E1115 10:49:44.189026    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-njwzv" podUID="7a2561d5-06a6-4228-a9dc-bc9cdf6c8fce"
	Nov 15 10:49:48 functional-385299 kubelet[3862]: E1115 10:49:48.189473    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wgnn9" podUID="cf1c8a22-ecf6-426a-9bef-a6f720db1c54"
	Nov 15 10:49:59 functional-385299 kubelet[3862]: E1115 10:49:59.189341    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-njwzv" podUID="7a2561d5-06a6-4228-a9dc-bc9cdf6c8fce"
	Nov 15 10:50:00 functional-385299 kubelet[3862]: E1115 10:50:00.189842    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wgnn9" podUID="cf1c8a22-ecf6-426a-9bef-a6f720db1c54"
	Nov 15 10:50:11 functional-385299 kubelet[3862]: E1115 10:50:11.189345    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wgnn9" podUID="cf1c8a22-ecf6-426a-9bef-a6f720db1c54"
	Nov 15 10:50:14 functional-385299 kubelet[3862]: E1115 10:50:14.189418    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-njwzv" podUID="7a2561d5-06a6-4228-a9dc-bc9cdf6c8fce"
	Nov 15 10:50:22 functional-385299 kubelet[3862]: E1115 10:50:22.189074    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wgnn9" podUID="cf1c8a22-ecf6-426a-9bef-a6f720db1c54"
	Nov 15 10:50:26 functional-385299 kubelet[3862]: E1115 10:50:26.188973    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-njwzv" podUID="7a2561d5-06a6-4228-a9dc-bc9cdf6c8fce"
	Nov 15 10:50:37 functional-385299 kubelet[3862]: E1115 10:50:37.189343    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wgnn9" podUID="cf1c8a22-ecf6-426a-9bef-a6f720db1c54"
	Nov 15 10:50:41 functional-385299 kubelet[3862]: E1115 10:50:41.189505    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-njwzv" podUID="7a2561d5-06a6-4228-a9dc-bc9cdf6c8fce"
	Nov 15 10:50:51 functional-385299 kubelet[3862]: E1115 10:50:51.189691    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wgnn9" podUID="cf1c8a22-ecf6-426a-9bef-a6f720db1c54"
	Nov 15 10:50:54 functional-385299 kubelet[3862]: E1115 10:50:54.189483    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-njwzv" podUID="7a2561d5-06a6-4228-a9dc-bc9cdf6c8fce"
	Nov 15 10:51:04 functional-385299 kubelet[3862]: E1115 10:51:04.189328    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wgnn9" podUID="cf1c8a22-ecf6-426a-9bef-a6f720db1c54"
	Nov 15 10:51:05 functional-385299 kubelet[3862]: E1115 10:51:05.189938    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-njwzv" podUID="7a2561d5-06a6-4228-a9dc-bc9cdf6c8fce"
	Nov 15 10:51:16 functional-385299 kubelet[3862]: E1115 10:51:16.188840    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-njwzv" podUID="7a2561d5-06a6-4228-a9dc-bc9cdf6c8fce"
	Nov 15 10:51:16 functional-385299 kubelet[3862]: E1115 10:51:16.189367    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wgnn9" podUID="cf1c8a22-ecf6-426a-9bef-a6f720db1c54"
	Nov 15 10:51:28 functional-385299 kubelet[3862]: E1115 10:51:28.189801    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wgnn9" podUID="cf1c8a22-ecf6-426a-9bef-a6f720db1c54"
	Nov 15 10:51:28 functional-385299 kubelet[3862]: E1115 10:51:28.190326    3862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-njwzv" podUID="7a2561d5-06a6-4228-a9dc-bc9cdf6c8fce"
	
	
	==> storage-provisioner [0fcfd68b3ba239953582173417e7e6f77211022e7a6f520bf701736bdcc60dac] <==
	I1115 10:40:18.979580       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:40:18.993343       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:40:18.993426       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:40:18.995996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:40:22.450904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:40:26.710817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:40:30.309261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:40:33.363175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a223985c5d2d5f3eefe058655762ce4131f7402b256832246df29c681db71ea2] <==
	W1115 10:51:08.711126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:10.714880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:10.719655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:12.722827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:12.729425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:14.732430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:14.736583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:16.740277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:16.745418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:18.748610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:18.755235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:20.758876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:20.763133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:22.766574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:22.771083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:24.774025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:24.778475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:26.781333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:26.788112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:28.791123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:28.795926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:30.798648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:30.803619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:32.806675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:51:32.813401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-385299 -n functional-385299
helpers_test.go:269: (dbg) Run:  kubectl --context functional-385299 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-wgnn9 hello-node-connect-7d85dfc575-njwzv
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-385299 describe pod hello-node-75c85bcc94-wgnn9 hello-node-connect-7d85dfc575-njwzv
helpers_test.go:290: (dbg) kubectl --context functional-385299 describe pod hello-node-75c85bcc94-wgnn9 hello-node-connect-7d85dfc575-njwzv:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-wgnn9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-385299/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 10:41:48 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-45qnm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-45qnm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m45s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wgnn9 to functional-385299
	  Normal   Pulling    6m51s (x5 over 9m45s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m51s (x5 over 9m45s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m51s (x5 over 9m45s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m37s (x21 over 9m45s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m37s (x21 over 9m45s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-njwzv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-385299/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 10:41:31 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ghfsv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ghfsv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-njwzv to functional-385299
	  Normal   Pulling    7m11s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m48s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.57s)
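
Note: the kubelet events and pod descriptions above all trace back to CRI-O's short-name policy: with short-name-mode set to enforcing, the unqualified reference kicbase/echo-server resolves to an ambiguous registry list and every pull is rejected, so both hello-node pods sit in ImagePullBackOff for the full wait. A minimal workaround sketch, assuming docker.io is the intended registry for this image (the test itself deploys the bare short name):

	# Fully qualifying the image keeps CRI-O out of short-name resolution entirely.
	kubectl --context functional-385299 create deployment hello-node-connect --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-385299 expose deployment hello-node-connect --type=NodePort --port=8080

With a qualified name the ambiguity check never fires; the same applies to the hello-node deployment used by ServiceCmd/DeployApp below.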

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-385299 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-385299 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-wgnn9" [cf1c8a22-ecf6-426a-9bef-a6f720db1c54] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1115 10:42:03.994995  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:44:20.129786  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:44:47.836423  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:49:20.130010  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-385299 -n functional-385299
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-15 10:51:49.426001806 +0000 UTC m=+1225.784227615
functional_test.go:1460: (dbg) Run:  kubectl --context functional-385299 describe po hello-node-75c85bcc94-wgnn9 -n default
functional_test.go:1460: (dbg) kubectl --context functional-385299 describe po hello-node-75c85bcc94-wgnn9 -n default:
Name:             hello-node-75c85bcc94-wgnn9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-385299/192.168.49.2
Start Time:       Sat, 15 Nov 2025 10:41:48 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-45qnm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-45qnm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wgnn9 to functional-385299
Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m52s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m52s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-385299 logs hello-node-75c85bcc94-wgnn9 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-385299 logs hello-node-75c85bcc94-wgnn9 -n default: exit status 1 (125.343917ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-wgnn9" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-385299 logs hello-node-75c85bcc94-wgnn9 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.86s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-385299 service --namespace=default --https --url hello-node: exit status 115 (509.792462ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30279
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-385299 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)
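
Note: this subtest, together with ServiceCmd/Format and ServiceCmd/URL below, fails the same way: minikube can still compute the NodePort URL, but its readiness check finds no running pod behind the hello-node service because the deployment never escaped ImagePullBackOff. A quick confirmation sketch (same context and service name the tests use; an empty ENDPOINTS column is what SVC_UNREACHABLE reports):

	kubectl --context functional-385299 get endpoints hello-node
	kubectl --context functional-385299 get pods -l app=hello-node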

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-385299 service hello-node --url --format={{.IP}}: exit status 115 (447.157698ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-385299 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-385299 service hello-node --url: exit status 115 (525.758381ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30279
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-385299 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30279
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image load --daemon kicbase/echo-server:functional-385299 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-385299" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image load --daemon kicbase/echo-server:functional-385299 --alsologtostderr
2025/11/15 10:51:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-385299 image load --daemon kicbase/echo-server:functional-385299 --alsologtostderr: (1.139529039s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-385299" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-385299
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image load --daemon kicbase/echo-server:functional-385299 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-385299" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)
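
Note: ImageLoadDaemon, ImageReloadDaemon and ImageTagAndLoadDaemon all trip the same assertion: after `image load --daemon`, `image ls` does not list kicbase/echo-server:functional-385299. On this job the cluster runtime is CRI-O, so `image ls` reflects CRI-O's store rather than the host Docker daemon the image was tagged in. A diagnostic sketch for narrowing down where the image actually landed (crictl on the node is assumed to be available, as it normally is on minikube nodes):

	# What minikube reports for the cluster runtime:
	out/minikube-linux-arm64 -p functional-385299 image ls
	# The same question asked of CRI-O directly, inside the node:
	out/minikube-linux-arm64 -p functional-385299 ssh -- sudo crictl images
	# The host-side Docker daemon the image was loaded from:
	docker images kicbase/echo-server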

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image save kicbase/echo-server:functional-385299 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1115 10:52:03.461763  614729 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:52:03.461938  614729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:52:03.461948  614729 out.go:374] Setting ErrFile to fd 2...
	I1115 10:52:03.461952  614729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:52:03.462217  614729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:52:03.463342  614729 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:52:03.463462  614729 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:52:03.463970  614729 cli_runner.go:164] Run: docker container inspect functional-385299 --format={{.State.Status}}
	I1115 10:52:03.485071  614729 ssh_runner.go:195] Run: systemctl --version
	I1115 10:52:03.485138  614729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
	I1115 10:52:03.503248  614729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/functional-385299/id_rsa Username:docker}
	I1115 10:52:03.611827  614729 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1115 10:52:03.611910  614729 cache_images.go:255] Failed to load cached images for "functional-385299": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1115 10:52:03.611936  614729 cache_images.go:267] failed pushing to: functional-385299
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
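
The stderr above points at the proximate cause: the tarball at /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar was never written because the earlier ImageSaveToFile step failed, so the load stats a path that does not exist. Below is a hedged Go sketch of the save, stat, load chain these two tests exercise; it is not the test code itself, and the binary path, profile name, and tarball path are taken from this run's log.

// repro_save_then_load.go — hedged sketch of the ImageSaveToFile / ImageLoadFromFile
// chain: save an image from the node to a tarball, verify the file exists, then
// load it back. The stat step is where this run fails ("no such file or directory"),
// because the save never produced the tarball.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const (
		minikube = "out/minikube-linux-arm64"                                             // assumption: binary path from this CI run
		profile  = "functional-385299"                                                    // assumption: profile from the log
		tarball  = "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" // path from the log
	)
	image := "kicbase/echo-server:" + profile

	// Mirrors functional_test.go:395 — save the image out of the node to a file.
	if out, err := exec.Command(minikube, "-p", profile, "image", "save", image, tarball).CombinedOutput(); err != nil {
		log.Fatalf("image save failed: %v\n%s", err, out)
	}

	// Mirrors functional_test.go:401 — the tarball must exist before loading.
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("tarball missing after save (the condition reported above): %v", err)
	}

	// Mirrors functional_test.go:424 — load the tarball back into the node.
	if out, err := exec.Command(minikube, "-p", profile, "image", "load", tarball).CombinedOutput(); err != nil {
		log.Fatalf("image load from file failed: %v\n%s", err, out)
	}
	log.Println("save/load round trip succeeded")
}
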

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-385299
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image save --daemon kicbase/echo-server:functional-385299 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-385299
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-385299: exit status 1 (27.080156ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-385299
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-385299
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)
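
For reference, this check exports the image from the node back into the host Docker daemon and then inspects it under the localhost/ prefix. The hedged Go sketch below mirrors that round trip; it is not the test's own code, and the binary path and profile name are assumptions taken from this run's log.

// repro_save_daemon.go — hedged sketch of the ImageSaveDaemon check: export the
// image from the minikube node into the host Docker daemon, then confirm the
// host daemon can inspect it under the localhost/ prefix used for CRI-O exports.
package main

import (
	"log"
	"os/exec"
)

func main() {
	const (
		minikube = "out/minikube-linux-arm64" // assumption: binary path from this CI run
		profile  = "functional-385299"        // assumption: profile from the log
	)
	image := "kicbase/echo-server:" + profile

	// Mirrors functional_test.go:439 — push the node's image into the host daemon.
	if out, err := exec.Command(minikube, "-p", profile, "image", "save", "--daemon", image).CombinedOutput(); err != nil {
		log.Fatalf("image save --daemon failed: %v\n%s", err, out)
	}

	// Mirrors functional_test.go:447 — the host daemon should now know the image.
	if out, err := exec.Command("docker", "image", "inspect", "localhost/"+image).CombinedOutput(); err != nil {
		log.Fatalf("image not found in the host daemon (the condition reported above): %v\n%s", err, out)
	}
	log.Println("image present in the host Docker daemon")
}
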

TestMultiControlPlane/serial/RestartSecondaryNode (521.68s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 node start m02 --alsologtostderr -v 5
E1115 10:59:06.235124  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:59:20.135329  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:01:22.372617  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:01:50.077567  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:04:20.130222  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439113 node start m02 --alsologtostderr -v 5: exit status 80 (7m41.671438504s)
-- stdout --
	* Starting "ha-439113-m02" control-plane node in "ha-439113" cluster
	* Pulling base image v0.0.48-1761985721-21837 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	
	
-- /stdout --
** stderr ** 
	I1115 10:58:10.442228  630419 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:58:10.443660  630419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:58:10.443681  630419 out.go:374] Setting ErrFile to fd 2...
	I1115 10:58:10.443687  630419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:58:10.443969  630419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:58:10.444289  630419 mustload.go:66] Loading cluster: ha-439113
	I1115 10:58:10.444712  630419 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:58:10.445225  630419 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	W1115 10:58:10.462954  630419 host.go:58] "ha-439113-m02" host status: Stopped
	I1115 10:58:10.465979  630419 out.go:179] * Starting "ha-439113-m02" control-plane node in "ha-439113" cluster
	I1115 10:58:10.468661  630419 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:58:10.471514  630419 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:58:10.474600  630419 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:58:10.474654  630419 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:58:10.474674  630419 cache.go:65] Caching tarball of preloaded images
	I1115 10:58:10.474704  630419 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:58:10.474775  630419 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:58:10.474786  630419 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:58:10.474930  630419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:58:10.495941  630419 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:58:10.495965  630419 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:58:10.495983  630419 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:58:10.496009  630419 start.go:360] acquireMachinesLock for ha-439113-m02: {Name:mk3e9fb80c1177aa3d9d60f93ad9a2d436f1d794 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:58:10.496149  630419 start.go:364] duration metric: took 57.256µs to acquireMachinesLock for "ha-439113-m02"
	I1115 10:58:10.496175  630419 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:58:10.496185  630419 fix.go:54] fixHost starting: m02
	I1115 10:58:10.496458  630419 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 10:58:10.514409  630419 fix.go:112] recreateIfNeeded on ha-439113-m02: state=Stopped err=<nil>
	W1115 10:58:10.514446  630419 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:58:10.517549  630419 out.go:252] * Restarting existing docker container for "ha-439113-m02" ...
	I1115 10:58:10.517660  630419 cli_runner.go:164] Run: docker start ha-439113-m02
	I1115 10:58:10.801164  630419 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 10:58:10.821296  630419 kic.go:430] container "ha-439113-m02" state is running.
	I1115 10:58:10.821699  630419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 10:58:10.847154  630419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:58:10.847406  630419 machine.go:94] provisionDockerMachine start ...
	I1115 10:58:10.847486  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:58:10.877536  630419 main.go:143] libmachine: Using SSH client type: native
	I1115 10:58:10.877864  630419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I1115 10:58:10.877888  630419 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:58:10.878456  630419 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40596->127.0.0.1:33544: read: connection reset by peer
	I1115 10:58:14.108575  630419 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02
	
	I1115 10:58:14.108606  630419 ubuntu.go:182] provisioning hostname "ha-439113-m02"
	I1115 10:58:14.108680  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:58:14.149049  630419 main.go:143] libmachine: Using SSH client type: native
	I1115 10:58:14.149356  630419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I1115 10:58:14.149374  630419 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113-m02 && echo "ha-439113-m02" | sudo tee /etc/hostname
	I1115 10:58:14.406256  630419 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02
	
	I1115 10:58:14.406342  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:58:14.443851  630419 main.go:143] libmachine: Using SSH client type: native
	I1115 10:58:14.444186  630419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I1115 10:58:14.444216  630419 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:58:14.649825  630419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:58:14.649868  630419 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 10:58:14.649900  630419 ubuntu.go:190] setting up certificates
	I1115 10:58:14.649927  630419 provision.go:84] configureAuth start
	I1115 10:58:14.649990  630419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 10:58:14.682102  630419 provision.go:143] copyHostCerts
	I1115 10:58:14.682146  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:58:14.682207  630419 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 10:58:14.682225  630419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:58:14.682322  630419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 10:58:14.682405  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:58:14.682426  630419 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 10:58:14.682445  630419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:58:14.682477  630419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 10:58:14.682521  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:58:14.682536  630419 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 10:58:14.682540  630419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:58:14.682564  630419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 10:58:14.682608  630419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113-m02 san=[127.0.0.1 192.168.49.3 ha-439113-m02 localhost minikube]
	I1115 10:58:15.647535  630419 provision.go:177] copyRemoteCerts
	I1115 10:58:15.647603  630419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:58:15.647650  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:58:15.667452  630419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:58:15.778694  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 10:58:15.778759  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:58:15.825160  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 10:58:15.825224  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 10:58:15.866553  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 10:58:15.866618  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 10:58:15.891284  630419 provision.go:87] duration metric: took 1.241331965s to configureAuth
	I1115 10:58:15.891312  630419 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:58:15.891547  630419 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:58:15.891665  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:58:15.918280  630419 main.go:143] libmachine: Using SSH client type: native
	I1115 10:58:15.918599  630419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
	I1115 10:58:15.918614  630419 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:58:17.386429  630419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:58:17.386472  630419 machine.go:97] duration metric: took 6.53904776s to provisionDockerMachine
	I1115 10:58:17.386483  630419 start.go:293] postStartSetup for "ha-439113-m02" (driver="docker")
	I1115 10:58:17.386493  630419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:58:17.386587  630419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:58:17.386639  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:58:17.404287  630419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:58:17.512850  630419 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:58:17.516363  630419 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:58:17.516394  630419 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:58:17.516407  630419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 10:58:17.516470  630419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 10:58:17.516555  630419 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 10:58:17.516567  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 10:58:17.516666  630419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:58:17.524150  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:58:17.548278  630419 start.go:296] duration metric: took 161.779844ms for postStartSetup
	I1115 10:58:17.548371  630419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:58:17.548410  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:58:17.566897  630419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:58:17.674627  630419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:58:17.679639  630419 fix.go:56] duration metric: took 7.183447348s for fixHost
	I1115 10:58:17.679660  630419 start.go:83] releasing machines lock for "ha-439113-m02", held for 7.183496432s
	I1115 10:58:17.679740  630419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 10:58:17.698090  630419 ssh_runner.go:195] Run: systemctl --version
	I1115 10:58:17.698141  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:58:17.698377  630419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:58:17.698444  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:58:17.726184  630419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:58:17.738444  630419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:58:17.855433  630419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:58:17.968607  630419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:58:17.975680  630419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:58:17.975754  630419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:58:17.990032  630419 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:58:17.990066  630419 start.go:496] detecting cgroup driver to use...
	I1115 10:58:17.990100  630419 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:58:17.990164  630419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:58:18.016078  630419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:58:18.039244  630419 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:58:18.039340  630419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:58:18.066497  630419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:58:18.095457  630419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:58:18.402520  630419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:58:18.638606  630419 docker.go:234] disabling docker service ...
	I1115 10:58:18.638714  630419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:58:18.666005  630419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:58:18.687038  630419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:58:18.946716  630419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:58:19.201781  630419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:58:19.223206  630419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:58:19.247329  630419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:58:19.247445  630419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:58:19.260220  630419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:58:19.260330  630419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:58:19.277311  630419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:58:19.295270  630419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:58:19.309034  630419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:58:19.325882  630419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:58:19.341268  630419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:58:19.363375  630419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:58:19.378202  630419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:58:19.389364  630419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:58:19.405399  630419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:58:19.624721  630419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:59:49.858069  630419 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.233245048s)
	I1115 10:59:49.858112  630419 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:59:49.858184  630419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:59:49.863569  630419 start.go:564] Will wait 60s for crictl version
	I1115 10:59:49.863630  630419 ssh_runner.go:195] Run: which crictl
	I1115 10:59:49.867776  630419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:59:49.898622  630419 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:59:49.898716  630419 ssh_runner.go:195] Run: crio --version
	I1115 10:59:49.928553  630419 ssh_runner.go:195] Run: crio --version
	I1115 10:59:49.962124  630419 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:59:49.965145  630419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:59:50.036504  630419 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-15 10:59:50.025473673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:59:50.036663  630419 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:59:50.059575  630419 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 10:59:50.065444  630419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:59:50.075650  630419 mustload.go:66] Loading cluster: ha-439113
	I1115 10:59:50.075898  630419 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:59:50.076172  630419 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:59:50.099180  630419 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:59:50.099495  630419 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.3
	I1115 10:59:50.099512  630419 certs.go:195] generating shared ca certs ...
	I1115 10:59:50.099537  630419 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:59:50.099727  630419 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 10:59:50.099779  630419 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 10:59:50.099791  630419 certs.go:257] generating profile certs ...
	I1115 10:59:50.099880  630419 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 10:59:50.099914  630419 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.e487d938
	I1115 10:59:50.099935  630419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.e487d938 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1115 10:59:50.512899  630419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.e487d938 ...
	I1115 10:59:50.512934  630419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.e487d938: {Name:mk29ea75749cc8a74d3bccc2390d33fcbf97f44c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:59:50.513140  630419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.e487d938 ...
	I1115 10:59:50.513157  630419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.e487d938: {Name:mk813a8a2137a06fa128f13a44cc1d9be451a1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:59:50.513249  630419 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.e487d938 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt
	I1115 10:59:50.513394  630419 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.e487d938 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key
	I1115 10:59:50.513574  630419 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
	I1115 10:59:50.513592  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 10:59:50.513609  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 10:59:50.513626  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 10:59:50.513646  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 10:59:50.513662  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 10:59:50.513673  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 10:59:50.513688  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 10:59:50.513699  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 10:59:50.513752  630419 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 10:59:50.513784  630419 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 10:59:50.513792  630419 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:59:50.513818  630419 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 10:59:50.513840  630419 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:59:50.513870  630419 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 10:59:50.513918  630419 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:59:50.513955  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 10:59:50.513972  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:59:50.513987  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 10:59:50.514044  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:59:50.531342  630419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:59:50.629240  630419 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 10:59:50.633353  630419 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 10:59:50.641938  630419 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 10:59:50.645800  630419 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 10:59:50.654801  630419 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 10:59:50.658702  630419 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 10:59:50.666910  630419 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 10:59:50.671335  630419 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 10:59:50.680198  630419 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 10:59:50.683971  630419 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 10:59:50.692941  630419 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 10:59:50.696834  630419 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 10:59:50.706131  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:59:50.726772  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 10:59:50.747008  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:59:50.767273  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:59:50.797811  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1115 10:59:50.819892  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:59:50.845472  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:59:50.867121  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:59:50.893240  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 10:59:50.917661  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:59:50.937259  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 10:59:50.967384  630419 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 10:59:50.985679  630419 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 10:59:51.004975  630419 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 10:59:51.026780  630419 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 10:59:51.042632  630419 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 10:59:51.060195  630419 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 10:59:51.075842  630419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 10:59:51.090338  630419 ssh_runner.go:195] Run: openssl version
	I1115 10:59:51.097485  630419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 10:59:51.107651  630419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 10:59:51.112414  630419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 10:59:51.112496  630419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 10:59:51.155634  630419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:59:51.164691  630419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:59:51.176463  630419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:59:51.180851  630419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:59:51.181101  630419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:59:51.227059  630419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:59:51.235146  630419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 10:59:51.243562  630419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 10:59:51.247373  630419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 10:59:51.247443  630419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 10:59:51.291544  630419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 10:59:51.300840  630419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:59:51.312408  630419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:59:51.361121  630419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:59:51.404973  630419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:59:51.457427  630419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:59:51.499348  630419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:59:51.542528  630419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 10:59:51.586557  630419 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1115 10:59:51.586707  630419 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:59:51.586742  630419 kube-vip.go:115] generating kube-vip config ...
	I1115 10:59:51.586802  630419 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 10:59:51.600019  630419 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:59:51.600139  630419 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1115 10:59:51.600219  630419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:59:51.608017  630419 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:59:51.608105  630419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 10:59:51.616100  630419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 10:59:51.630054  630419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:59:51.643721  630419 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 10:59:51.660541  630419 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 10:59:51.664639  630419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:59:51.675728  630419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:59:51.824993  630419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:59:51.845485  630419 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:59:51.845643  630419 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:59:51.845845  630419 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:59:51.849497  630419 out.go:179] * Enabled addons: 
	I1115 10:59:51.849643  630419 out.go:179] * Verifying Kubernetes components...
	I1115 10:59:51.852388  630419 addons.go:515] duration metric: took 6.748705ms for enable addons: enabled=[]
	I1115 10:59:51.852486  630419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:59:51.993603  630419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:59:52.024172  630419 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 10:59:52.024338  630419 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 10:59:52.024946  630419 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1115 10:59:52.024848  630419 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 10:59:52.025051  630419 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 10:59:52.025072  630419 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 10:59:52.025117  630419 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 10:59:52.025142  630419 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 10:59:52.025428  630419 node_ready.go:35] waiting up to 6m0s for node "ha-439113-m02" to be "Ready" ...
	W1115 10:59:54.029461  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 10:59:56.029761  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 10:59:58.030327  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:00.032832  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:02.047014  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:04.529043  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:06.529808  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:08.530004  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:10.530504  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:13.029959  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:15.038101  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:17.530159  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:20.030591  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:22.529235  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:24.529427  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:27.029215  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:29.029427  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:31.031034  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:33.530158  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:36.030394  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:38.030596  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:40.040209  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:42.531199  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:45.044147  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:47.529747  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:50.030088  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:52.530633  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:55.030884  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:57.528806  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:00:59.529391  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:01.530680  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:04.029508  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:06.030518  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:08.529689  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:11.029734  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:13.528847  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:15.531784  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:17.539135  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:20.029710  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:22.029950  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:24.529572  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:26.529619  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:29.029700  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:31.029990  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:33.030872  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:35.529614  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:37.529942  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:39.530532  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:41.540178  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:44.030478  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:46.529728  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:49.029294  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:51.029706  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:53.529186  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:55.529895  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:01:57.532148  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:00.044323  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:02.529302  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:04.529802  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:07.029251  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:09.529128  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:11.529171  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:13.538771  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:16.029155  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:18.029372  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:20.030366  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:22.530628  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:25.029889  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:27.530268  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:29.530595  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:32.030480  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:34.528848  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:36.530189  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:39.029462  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:41.029707  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:43.529738  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:45.532662  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:48.029798  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:50.529695  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:53.029388  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:55.029642  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:57.530323  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:02:59.531555  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:02.029372  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:04.029682  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:06.030044  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:08.030238  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:10.030682  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:12.531707  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:15.033328  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:17.529568  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:20.030975  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:22.529899  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:25.029587  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:27.528935  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:29.530147  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:32.029638  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:34.034829  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:36.529487  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:38.531875  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:41.030254  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:43.529140  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:45.529852  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:48.029447  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:50.029779  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:52.528797  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:54.529151  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:56.529562  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:03:59.029176  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:01.535346  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:04.029490  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:06.529168  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:09.029311  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:11.030408  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:13.032486  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:15.036911  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:17.528952  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:19.529621  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:21.530055  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:24.029949  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:26.529957  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:29.029141  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:31.529598  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:34.029208  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:36.029668  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:38.030014  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:40.041175  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:42.529325  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:44.529838  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:47.029424  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:49.029873  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:51.528695  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:53.530114  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:56.029622  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:04:58.029671  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:00.029832  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:02.030114  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:04.528677  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:07.029598  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:09.029676  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:11.529207  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:13.536090  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:16.030784  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:18.529178  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:20.529503  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:23.028730  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:25.031438  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:27.529265  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:29.529967  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:32.029634  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:34.528937  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:36.529312  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:38.529907  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:41.029854  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:43.529586  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:45.539532  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:48.029029  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	W1115 11:05:50.030447  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
	I1115 11:05:52.026364  630419 node_ready.go:38] duration metric: took 6m0.000784363s for node "ha-439113-m02" to be "Ready" ...
	I1115 11:05:52.029829  630419 out.go:203] 
	W1115 11:05:52.032652  630419 out.go:285] X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1115 11:05:52.032680  630419 out.go:285] * 
	* 
	W1115 11:05:52.038824  630419 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 11:05:52.041971  630419 out.go:203] 

                                                
                                                
** /stderr **
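Note on the failure above: the exit is the 6-minute node-readiness wait expiring (node_ready.go / WaitNodeCondition), after the "Ready":"Unknown" check was retried roughly every 2.5 seconds. As a rough illustration only (not minikube's actual implementation; the kubeconfig path, intervals and clientset wiring are assumptions), a client-go poll of a node's Ready condition with a deadline looks like this:

// Minimal sketch: poll a node's Ready condition until it is True or the timeout
// expires, roughly mirroring the retried check in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep retrying until the deadline
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Kubeconfig path assumed for illustration; the test points at the ha-439113 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-439113-m02", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err) // analogous to GUEST_NODE_START above
	}
}
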
ha_test.go:424: I1115 10:58:10.442228  630419 out.go:360] Setting OutFile to fd 1 ...
I1115 10:58:10.443660  630419 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 10:58:10.443681  630419 out.go:374] Setting ErrFile to fd 2...
I1115 10:58:10.443687  630419 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 10:58:10.443969  630419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
I1115 10:58:10.444289  630419 mustload.go:66] Loading cluster: ha-439113
I1115 10:58:10.444712  630419 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 10:58:10.445225  630419 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
W1115 10:58:10.462954  630419 host.go:58] "ha-439113-m02" host status: Stopped
I1115 10:58:10.465979  630419 out.go:179] * Starting "ha-439113-m02" control-plane node in "ha-439113" cluster
I1115 10:58:10.468661  630419 cache.go:134] Beginning downloading kic base image for docker with crio
I1115 10:58:10.471514  630419 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
I1115 10:58:10.474600  630419 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1115 10:58:10.474654  630419 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
I1115 10:58:10.474674  630419 cache.go:65] Caching tarball of preloaded images
I1115 10:58:10.474704  630419 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
I1115 10:58:10.474775  630419 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
I1115 10:58:10.474786  630419 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1115 10:58:10.474930  630419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
I1115 10:58:10.495941  630419 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
I1115 10:58:10.495965  630419 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
I1115 10:58:10.495983  630419 cache.go:243] Successfully downloaded all kic artifacts
I1115 10:58:10.496009  630419 start.go:360] acquireMachinesLock for ha-439113-m02: {Name:mk3e9fb80c1177aa3d9d60f93ad9a2d436f1d794 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1115 10:58:10.496149  630419 start.go:364] duration metric: took 57.256µs to acquireMachinesLock for "ha-439113-m02"
I1115 10:58:10.496175  630419 start.go:96] Skipping create...Using existing machine configuration
I1115 10:58:10.496185  630419 fix.go:54] fixHost starting: m02
I1115 10:58:10.496458  630419 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
I1115 10:58:10.514409  630419 fix.go:112] recreateIfNeeded on ha-439113-m02: state=Stopped err=<nil>
W1115 10:58:10.514446  630419 fix.go:138] unexpected machine state, will restart: <nil>
I1115 10:58:10.517549  630419 out.go:252] * Restarting existing docker container for "ha-439113-m02" ...
I1115 10:58:10.517660  630419 cli_runner.go:164] Run: docker start ha-439113-m02
I1115 10:58:10.801164  630419 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
I1115 10:58:10.821296  630419 kic.go:430] container "ha-439113-m02" state is running.
I1115 10:58:10.821699  630419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
I1115 10:58:10.847154  630419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
I1115 10:58:10.847406  630419 machine.go:94] provisionDockerMachine start ...
I1115 10:58:10.847486  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
I1115 10:58:10.877536  630419 main.go:143] libmachine: Using SSH client type: native
I1115 10:58:10.877864  630419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
I1115 10:58:10.877888  630419 main.go:143] libmachine: About to run SSH command:
hostname
I1115 10:58:10.878456  630419 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40596->127.0.0.1:33544: read: connection reset by peer
I1115 10:58:14.108575  630419 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02

                                                
                                                
I1115 10:58:14.108606  630419 ubuntu.go:182] provisioning hostname "ha-439113-m02"
I1115 10:58:14.108680  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
I1115 10:58:14.149049  630419 main.go:143] libmachine: Using SSH client type: native
I1115 10:58:14.149356  630419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
I1115 10:58:14.149374  630419 main.go:143] libmachine: About to run SSH command:
sudo hostname ha-439113-m02 && echo "ha-439113-m02" | sudo tee /etc/hostname
I1115 10:58:14.406256  630419 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02

                                                
                                                
I1115 10:58:14.406342  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
I1115 10:58:14.443851  630419 main.go:143] libmachine: Using SSH client type: native
I1115 10:58:14.444186  630419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
I1115 10:58:14.444216  630419 main.go:143] libmachine: About to run SSH command:

                                                
                                                
		if ! grep -xq '.*\sha-439113-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-439113-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I1115 10:58:14.649825  630419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
I1115 10:58:14.649868  630419 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
I1115 10:58:14.649900  630419 ubuntu.go:190] setting up certificates
I1115 10:58:14.649927  630419 provision.go:84] configureAuth start
I1115 10:58:14.649990  630419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
I1115 10:58:14.682102  630419 provision.go:143] copyHostCerts
I1115 10:58:14.682146  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
I1115 10:58:14.682207  630419 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
I1115 10:58:14.682225  630419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
I1115 10:58:14.682322  630419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
I1115 10:58:14.682405  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
I1115 10:58:14.682426  630419 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
I1115 10:58:14.682445  630419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
I1115 10:58:14.682477  630419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
I1115 10:58:14.682521  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
I1115 10:58:14.682536  630419 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
I1115 10:58:14.682540  630419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
I1115 10:58:14.682564  630419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
I1115 10:58:14.682608  630419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113-m02 san=[127.0.0.1 192.168.49.3 ha-439113-m02 localhost minikube]
I1115 10:58:15.647535  630419 provision.go:177] copyRemoteCerts
I1115 10:58:15.647603  630419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1115 10:58:15.647650  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
I1115 10:58:15.667452  630419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
I1115 10:58:15.778694  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1115 10:58:15.778759  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1115 10:58:15.825160  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1115 10:58:15.825224  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1115 10:58:15.866553  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
I1115 10:58:15.866618  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1115 10:58:15.891284  630419 provision.go:87] duration metric: took 1.241331965s to configureAuth
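The configureAuth step above regenerates the machine's server certificate with the SANs logged at 10:58:14 (127.0.0.1, 192.168.49.3, ha-439113-m02, localhost, minikube). As a self-contained illustration of that kind of step only (not minikube's provision code; the file paths, key format and CA loading are assumptions), a server certificate with DNS and IP SANs can be issued from an existing CA with crypto/x509:

// Illustrative sketch: sign a server certificate with DNS/IP SANs from a CA,
// similar in spirit to the "generating server cert ... san=[...]" step above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Assumed CA material; minikube keeps equivalents under .minikube/certs/.
	caCert, caKey := loadCA("ca.pem", "ca-key.pem")

	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-439113-m02"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-439113-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0600)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}

// loadCA reads a PEM CA certificate and its RSA (PKCS#1) key; error handling is
// omitted and the key format is an assumption made for brevity.
func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
	cb, _ := os.ReadFile(certPath)
	kb, _ := os.ReadFile(keyPath)
	cblk, _ := pem.Decode(cb)
	kblk, _ := pem.Decode(kb)
	cert, _ := x509.ParseCertificate(cblk.Bytes)
	key, _ := x509.ParsePKCS1PrivateKey(kblk.Bytes)
	return cert, key
}
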
I1115 10:58:15.891312  630419 ubuntu.go:206] setting minikube options for container-runtime
I1115 10:58:15.891547  630419 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 10:58:15.891665  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
I1115 10:58:15.918280  630419 main.go:143] libmachine: Using SSH client type: native
I1115 10:58:15.918599  630419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33544 <nil> <nil>}
I1115 10:58:15.918614  630419 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1115 10:58:17.386429  630419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

                                                
                                                
I1115 10:58:17.386472  630419 machine.go:97] duration metric: took 6.53904776s to provisionDockerMachine
I1115 10:58:17.386483  630419 start.go:293] postStartSetup for "ha-439113-m02" (driver="docker")
I1115 10:58:17.386493  630419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1115 10:58:17.386587  630419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1115 10:58:17.386639  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
I1115 10:58:17.404287  630419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
I1115 10:58:17.512850  630419 ssh_runner.go:195] Run: cat /etc/os-release
I1115 10:58:17.516363  630419 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1115 10:58:17.516394  630419 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1115 10:58:17.516407  630419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
I1115 10:58:17.516470  630419 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
I1115 10:58:17.516555  630419 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
I1115 10:58:17.516567  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
I1115 10:58:17.516666  630419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1115 10:58:17.524150  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
I1115 10:58:17.548278  630419 start.go:296] duration metric: took 161.779844ms for postStartSetup
I1115 10:58:17.548371  630419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1115 10:58:17.548410  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
I1115 10:58:17.566897  630419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
I1115 10:58:17.674627  630419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1115 10:58:17.679639  630419 fix.go:56] duration metric: took 7.183447348s for fixHost
I1115 10:58:17.679660  630419 start.go:83] releasing machines lock for "ha-439113-m02", held for 7.183496432s
I1115 10:58:17.679740  630419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
I1115 10:58:17.698090  630419 ssh_runner.go:195] Run: systemctl --version
I1115 10:58:17.698141  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
I1115 10:58:17.698377  630419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1115 10:58:17.698444  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
I1115 10:58:17.726184  630419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
I1115 10:58:17.738444  630419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
I1115 10:58:17.855433  630419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1115 10:58:17.968607  630419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1115 10:58:17.975680  630419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1115 10:58:17.975754  630419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1115 10:58:17.990032  630419 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1115 10:58:17.990066  630419 start.go:496] detecting cgroup driver to use...
I1115 10:58:17.990100  630419 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1115 10:58:17.990164  630419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1115 10:58:18.016078  630419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1115 10:58:18.039244  630419 docker.go:218] disabling cri-docker service (if available) ...
I1115 10:58:18.039340  630419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1115 10:58:18.066497  630419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1115 10:58:18.095457  630419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1115 10:58:18.402520  630419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1115 10:58:18.638606  630419 docker.go:234] disabling docker service ...
I1115 10:58:18.638714  630419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1115 10:58:18.666005  630419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1115 10:58:18.687038  630419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1115 10:58:18.946716  630419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1115 10:58:19.201781  630419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1115 10:58:19.223206  630419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1115 10:58:19.247329  630419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1115 10:58:19.247445  630419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1115 10:58:19.260220  630419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1115 10:58:19.260330  630419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1115 10:58:19.277311  630419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1115 10:58:19.295270  630419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1115 10:58:19.309034  630419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1115 10:58:19.325882  630419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1115 10:58:19.341268  630419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1115 10:58:19.363375  630419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1115 10:58:19.378202  630419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1115 10:58:19.389364  630419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1115 10:58:19.405399  630419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1115 10:58:19.624721  630419 ssh_runner.go:195] Run: sudo systemctl restart crio
I1115 10:59:49.858069  630419 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.233245048s)
I1115 10:59:49.858112  630419 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1115 10:59:49.858184  630419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
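After the 1m30s `systemctl restart crio`, the runner waits up to 60s for the CRI socket before probing crictl; the actual check in the log is a plain `stat` over SSH. Purely as a sketch of the same idea (paths and intervals assumed, not minikube's ssh_runner), a local wait for a unix socket to accept connections could look like this:

// Sketch: wait for a unix socket (e.g. /var/run/crio/crio.sock) to accept
// connections before a hard deadline, similar to the "Will wait 60s for socket path" step.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready after %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
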
I1115 10:59:49.863569  630419 start.go:564] Will wait 60s for crictl version
I1115 10:59:49.863630  630419 ssh_runner.go:195] Run: which crictl
I1115 10:59:49.867776  630419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1115 10:59:49.898622  630419 start.go:580] Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.34.1
RuntimeApiVersion:  v1
I1115 10:59:49.898716  630419 ssh_runner.go:195] Run: crio --version
I1115 10:59:49.928553  630419 ssh_runner.go:195] Run: crio --version
I1115 10:59:49.962124  630419 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
I1115 10:59:49.965145  630419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1115 10:59:50.036504  630419 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-15 10:59:50.025473673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1115 10:59:50.036663  630419 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1115 10:59:50.059575  630419 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
I1115 10:59:50.065444  630419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1115 10:59:50.075650  630419 mustload.go:66] Loading cluster: ha-439113
I1115 10:59:50.075898  630419 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 10:59:50.076172  630419 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
I1115 10:59:50.099180  630419 host.go:66] Checking if "ha-439113" exists ...
I1115 10:59:50.099495  630419 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.3
I1115 10:59:50.099512  630419 certs.go:195] generating shared ca certs ...
I1115 10:59:50.099537  630419 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 10:59:50.099727  630419 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
I1115 10:59:50.099779  630419 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
I1115 10:59:50.099791  630419 certs.go:257] generating profile certs ...
I1115 10:59:50.099880  630419 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
I1115 10:59:50.099914  630419 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.e487d938
I1115 10:59:50.099935  630419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.e487d938 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
I1115 10:59:50.512899  630419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.e487d938 ...
I1115 10:59:50.512934  630419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.e487d938: {Name:mk29ea75749cc8a74d3bccc2390d33fcbf97f44c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 10:59:50.513140  630419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.e487d938 ...
I1115 10:59:50.513157  630419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.e487d938: {Name:mk813a8a2137a06fa128f13a44cc1d9be451a1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 10:59:50.513249  630419 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.e487d938 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt
I1115 10:59:50.513394  630419 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.e487d938 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key
I1115 10:59:50.513574  630419 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
I1115 10:59:50.513592  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1115 10:59:50.513609  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1115 10:59:50.513626  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1115 10:59:50.513646  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1115 10:59:50.513662  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1115 10:59:50.513673  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1115 10:59:50.513688  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1115 10:59:50.513699  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1115 10:59:50.513752  630419 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
W1115 10:59:50.513784  630419 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
I1115 10:59:50.513792  630419 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
I1115 10:59:50.513818  630419 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
I1115 10:59:50.513840  630419 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
I1115 10:59:50.513870  630419 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
I1115 10:59:50.513918  630419 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
I1115 10:59:50.513955  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
I1115 10:59:50.513972  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1115 10:59:50.513987  630419 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
I1115 10:59:50.514044  630419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
I1115 10:59:50.531342  630419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
I1115 10:59:50.629240  630419 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
I1115 10:59:50.633353  630419 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I1115 10:59:50.641938  630419 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
I1115 10:59:50.645800  630419 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
I1115 10:59:50.654801  630419 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
I1115 10:59:50.658702  630419 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I1115 10:59:50.666910  630419 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
I1115 10:59:50.671335  630419 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
I1115 10:59:50.680198  630419 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
I1115 10:59:50.683971  630419 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I1115 10:59:50.692941  630419 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
I1115 10:59:50.696834  630419 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
I1115 10:59:50.706131  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1115 10:59:50.726772  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1115 10:59:50.747008  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1115 10:59:50.767273  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1115 10:59:50.797811  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
I1115 10:59:50.819892  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1115 10:59:50.845472  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1115 10:59:50.867121  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1115 10:59:50.893240  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
I1115 10:59:50.917661  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1115 10:59:50.937259  630419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
I1115 10:59:50.967384  630419 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I1115 10:59:50.985679  630419 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
I1115 10:59:51.004975  630419 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I1115 10:59:51.026780  630419 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
I1115 10:59:51.042632  630419 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I1115 10:59:51.060195  630419 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
I1115 10:59:51.075842  630419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I1115 10:59:51.090338  630419 ssh_runner.go:195] Run: openssl version
I1115 10:59:51.097485  630419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
I1115 10:59:51.107651  630419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
I1115 10:59:51.112414  630419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
I1115 10:59:51.112496  630419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
I1115 10:59:51.155634  630419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
I1115 10:59:51.164691  630419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1115 10:59:51.176463  630419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1115 10:59:51.180851  630419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
I1115 10:59:51.181101  630419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1115 10:59:51.227059  630419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1115 10:59:51.235146  630419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
I1115 10:59:51.243562  630419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
I1115 10:59:51.247373  630419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
I1115 10:59:51.247443  630419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
I1115 10:59:51.291544  630419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
I1115 10:59:51.300840  630419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1115 10:59:51.312408  630419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1115 10:59:51.361121  630419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1115 10:59:51.404973  630419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1115 10:59:51.457427  630419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1115 10:59:51.499348  630419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1115 10:59:51.542528  630419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
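The six `openssl x509 -noout -in ... -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours. Shown only as an illustration of what -checkend 86400 means (the path below is an assumption), the equivalent check in Go parses the PEM and compares NotAfter against now+24h:

// Illustration of `openssl x509 -checkend 86400`: report whether a PEM certificate
// expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	// Path assumed for illustration; the log checks the apiserver, etcd and front-proxy certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
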
I1115 10:59:51.586557  630419 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
I1115 10:59:51.586707  630419 kubeadm.go:947] kubelet [Unit]
Wants=crio.service

                                                
                                                
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3

                                                
                                                
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
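The unit drop-in above is what later gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node (see the scp lines further down). A sketch of how to inspect the result on the secondary node, assuming the profile and node names from this run:

    # Show the generated kubelet drop-in and the effective unit on ha-439113-m02.
    out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m02 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m02 -- sudo systemctl cat kubelet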
I1115 10:59:51.586742  630419 kube-vip.go:115] generating kube-vip config ...
I1115 10:59:51.586802  630419 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
I1115 10:59:51.600019  630419 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
stdout:

stderr:
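kube-vip's control-plane load-balancer relies on the kernel's IPVS support; because `lsmod | grep ip_vs` found nothing here, minikube gives up enabling that load-balancing (the VIP itself is still configured via ARP in the manifest below). A sketch of checking and loading the modules on the host kernel (module names are the usual IPVS set, assumed rather than taken from this log):

    # Check whether IPVS is available and try to load it (requires a kernel built with IPVS).
    lsmod | grep ip_vs || echo "ip_vs not loaded"
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs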
I1115 10:59:51.600139  630419 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.168.49.254
- name: prometheus_server
value: :2112
image: ghcr.io/kube-vip/kube-vip:v1.0.1
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/admin.conf"
name: kubeconfig
status: {}
I1115 10:59:51.600219  630419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1115 10:59:51.608017  630419 binaries.go:51] Found k8s binaries, skipping transfer
I1115 10:59:51.608105  630419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
I1115 10:59:51.616100  630419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
I1115 10:59:51.630054  630419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1115 10:59:51.643721  630419 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
I1115 10:59:51.660541  630419 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
I1115 10:59:51.664639  630419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
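The pipeline above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the HA VIP (192.168.49.254). A quick way to confirm the mapping from inside the node:

    # Verify the VIP -> control-plane.minikube.internal mapping written above.
    grep control-plane.minikube.internal /etc/hosts
    getent hosts control-plane.minikube.internal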
I1115 10:59:51.675728  630419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1115 10:59:51.824993  630419 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1115 10:59:51.845485  630419 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1115 10:59:51.845643  630419 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1115 10:59:51.845845  630419 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 10:59:51.849497  630419 out.go:179] * Enabled addons: 
I1115 10:59:51.849643  630419 out.go:179] * Verifying Kubernetes components...
I1115 10:59:51.852388  630419 addons.go:515] duration metric: took 6.748705ms for enable addons: enabled=[]
I1115 10:59:51.852486  630419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1115 10:59:51.993603  630419 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1115 10:59:52.024172  630419 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(ni
l)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W1115 10:59:52.024338  630419 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
I1115 10:59:52.024946  630419 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
I1115 10:59:52.024848  630419 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1115 10:59:52.025051  630419 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1115 10:59:52.025072  630419 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1115 10:59:52.025117  630419 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1115 10:59:52.025142  630419 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1115 10:59:52.025428  630419 node_ready.go:35] waiting up to 6m0s for node "ha-439113-m02" to be "Ready" ...
W1115 10:59:54.029461  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 10:59:56.029761  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 10:59:58.030327  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:00.032832  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:02.047014  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:04.529043  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:06.529808  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:08.530004  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:10.530504  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:13.029959  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:15.038101  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:17.530159  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:20.030591  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:22.529235  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:24.529427  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:27.029215  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:29.029427  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:31.031034  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:33.530158  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:36.030394  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:38.030596  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:40.040209  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:42.531199  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:45.044147  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:47.529747  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:50.030088  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:52.530633  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:55.030884  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:57.528806  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:00:59.529391  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:01.530680  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:04.029508  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:06.030518  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:08.529689  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:11.029734  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:13.528847  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:15.531784  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:17.539135  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:20.029710  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:22.029950  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:24.529572  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:26.529619  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:29.029700  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:31.029990  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:33.030872  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:35.529614  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:37.529942  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:39.530532  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:41.540178  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:44.030478  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:46.529728  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:49.029294  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:51.029706  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:53.529186  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:55.529895  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:01:57.532148  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:00.044323  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:02.529302  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:04.529802  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:07.029251  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:09.529128  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:11.529171  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:13.538771  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:16.029155  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:18.029372  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:20.030366  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:22.530628  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:25.029889  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:27.530268  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:29.530595  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:32.030480  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:34.528848  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:36.530189  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:39.029462  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:41.029707  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:43.529738  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:45.532662  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:48.029798  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:50.529695  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:53.029388  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:55.029642  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:57.530323  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:02:59.531555  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:02.029372  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:04.029682  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:06.030044  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:08.030238  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:10.030682  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:12.531707  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:15.033328  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:17.529568  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:20.030975  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:22.529899  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:25.029587  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:27.528935  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:29.530147  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:32.029638  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:34.034829  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:36.529487  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:38.531875  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:41.030254  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:43.529140  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:45.529852  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:48.029447  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:50.029779  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:52.528797  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:54.529151  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:56.529562  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:03:59.029176  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:01.535346  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:04.029490  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:06.529168  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:09.029311  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:11.030408  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:13.032486  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:15.036911  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:17.528952  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:19.529621  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:21.530055  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:24.029949  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:26.529957  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:29.029141  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:31.529598  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:34.029208  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:36.029668  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:38.030014  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:40.041175  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:42.529325  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:44.529838  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:47.029424  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:49.029873  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:51.528695  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:53.530114  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:56.029622  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:04:58.029671  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:00.029832  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:02.030114  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:04.528677  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:07.029598  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:09.029676  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:11.529207  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:13.536090  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:16.030784  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:18.529178  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:20.529503  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:23.028730  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:25.031438  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:27.529265  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:29.529967  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:32.029634  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:34.528937  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:36.529312  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:38.529907  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:41.029854  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:43.529586  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:45.539532  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:48.029029  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
W1115 11:05:50.030447  630419 node_ready.go:57] node "ha-439113-m02" has "Ready":"Unknown" status (will retry)
I1115 11:05:52.026364  630419 node_ready.go:38] duration metric: took 6m0.000784363s for node "ha-439113-m02" to be "Ready" ...
I1115 11:05:52.029829  630419 out.go:203] 
W1115 11:05:52.032652  630419 out.go:285] X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
W1115 11:05:52.032680  630419 out.go:285] * 
* 
W1115 11:05:52.038824  630419 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1115 11:05:52.041971  630419 out.go:203] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-linux-arm64 -p ha-439113 node start m02 --alsologtostderr -v 5": exit status 80
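The failure is purely a readiness timeout: kubelet on m02 is restarted, but the node never reports Ready within the 6-minute window, so `node start` exits with GUEST_NODE_START (exit status 80). A few commands that would usually show why the node stays NotReady, assuming kubectl is pointed at this cluster and reusing the profile/node names from the log:

    # Inspect node conditions and the kubelet log on the node that never became Ready.
    kubectl get nodes -o wide
    kubectl describe node ha-439113-m02
    out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m02 -- sudo journalctl -u kubelet --no-pager -n 50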
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5: exit status 2 (1.082155219s)

-- stdout --
	ha-439113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-439113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1115 11:05:52.162532  632293 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:05:52.162721  632293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:05:52.162748  632293 out.go:374] Setting ErrFile to fd 2...
	I1115 11:05:52.162767  632293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:05:52.163064  632293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:05:52.163297  632293 out.go:368] Setting JSON to false
	I1115 11:05:52.163373  632293 mustload.go:66] Loading cluster: ha-439113
	I1115 11:05:52.163430  632293 notify.go:221] Checking for updates...
	I1115 11:05:52.164808  632293 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:05:52.164997  632293 status.go:174] checking status of ha-439113 ...
	I1115 11:05:52.167104  632293 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:05:52.219184  632293 status.go:371] ha-439113 host status = "Running" (err=<nil>)
	I1115 11:05:52.219208  632293 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:05:52.219513  632293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:05:52.240733  632293 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:05:52.241100  632293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:05:52.241214  632293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:05:52.260304  632293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:05:52.375364  632293 ssh_runner.go:195] Run: systemctl --version
	I1115 11:05:52.384472  632293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:05:52.399014  632293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:05:52.493147  632293 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-15 11:05:52.482872015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:05:52.493774  632293 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:05:52.493805  632293 api_server.go:166] Checking apiserver status ...
	I1115 11:05:52.493857  632293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:05:52.507285  632293 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	I1115 11:05:52.516508  632293 api_server.go:182] apiserver freezer: "13:freezer:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63"
	I1115 11:05:52.516576  632293 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63/freezer.state
	I1115 11:05:52.524556  632293 api_server.go:204] freezer state: "THAWED"
	I1115 11:05:52.524585  632293 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:05:52.533197  632293 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:05:52.533227  632293 status.go:463] ha-439113 apiserver status = Running (err=<nil>)
	I1115 11:05:52.533239  632293 status.go:176] ha-439113 status: &{Name:ha-439113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:05:52.533284  632293 status.go:174] checking status of ha-439113-m02 ...
	I1115 11:05:52.533619  632293 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:05:52.552051  632293 status.go:371] ha-439113-m02 host status = "Running" (err=<nil>)
	I1115 11:05:52.552078  632293 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:05:52.552372  632293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:05:52.570618  632293 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:05:52.570941  632293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:05:52.570989  632293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:05:52.596849  632293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:05:52.720101  632293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:05:52.734977  632293 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:05:52.735020  632293 api_server.go:166] Checking apiserver status ...
	I1115 11:05:52.735061  632293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1115 11:05:52.747184  632293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:05:52.747211  632293 status.go:463] ha-439113-m02 apiserver status = Running (err=<nil>)
	I1115 11:05:52.747220  632293 status.go:176] ha-439113-m02 status: &{Name:ha-439113-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:05:52.747243  632293 status.go:174] checking status of ha-439113-m03 ...
	I1115 11:05:52.747551  632293 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 11:05:52.765325  632293 status.go:371] ha-439113-m03 host status = "Running" (err=<nil>)
	I1115 11:05:52.765359  632293 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:05:52.765679  632293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 11:05:52.794207  632293 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:05:52.794519  632293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:05:52.794638  632293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 11:05:52.814833  632293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 11:05:52.922932  632293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:05:52.936972  632293 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:05:52.937004  632293 api_server.go:166] Checking apiserver status ...
	I1115 11:05:52.937054  632293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:05:52.951568  632293 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	I1115 11:05:52.960283  632293 api_server.go:182] apiserver freezer: "13:freezer:/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585"
	I1115 11:05:52.960351  632293 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585/freezer.state
	I1115 11:05:52.968253  632293 api_server.go:204] freezer state: "THAWED"
	I1115 11:05:52.968279  632293 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:05:52.977891  632293 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:05:52.977920  632293 status.go:463] ha-439113-m03 apiserver status = Running (err=<nil>)
	I1115 11:05:52.977930  632293 status.go:176] ha-439113-m03 status: &{Name:ha-439113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:05:52.977948  632293 status.go:174] checking status of ha-439113-m04 ...
	I1115 11:05:52.978307  632293 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:05:53.000374  632293 status.go:371] ha-439113-m04 host status = "Running" (err=<nil>)
	I1115 11:05:53.000405  632293 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:05:53.000734  632293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:05:53.022892  632293 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:05:53.023226  632293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:05:53.023278  632293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:05:53.041763  632293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:05:53.146369  632293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:05:53.161628  632293 status.go:176] ha-439113-m04 status: &{Name:ha-439113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1115 11:05:53.167758  586561 retry.go:31] will retry after 1.183143982s: exit status 2
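In the status output above, "apiserver: Stopped" for m02 appears to come from `sudo pgrep -xnf kube-apiserver.*minikube.*` finding no process on that node; on the healthy control-plane nodes the same check goes on to read the container's freezer cgroup state and probe /healthz through the VIP. A sketch of that probe sequence run by hand inside a control-plane node (VIP taken from this log; anonymous access to /healthz is assumed):

    # Reproduce the apiserver liveness probe the status command performs.
    pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*') || echo "no kube-apiserver process"
    # The next two lines only make sense when a pid was found.
    sudo egrep '^[0-9]+:freezer:' /proc/${pid}/cgroup
    curl -sk https://192.168.49.254:8443/healthz; echo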
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5: exit status 2 (1.008734644s)

-- stdout --
	ha-439113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-439113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1115 11:05:54.406882  632485 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:05:54.407057  632485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:05:54.407070  632485 out.go:374] Setting ErrFile to fd 2...
	I1115 11:05:54.407075  632485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:05:54.407414  632485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:05:54.407743  632485 out.go:368] Setting JSON to false
	I1115 11:05:54.407797  632485 mustload.go:66] Loading cluster: ha-439113
	I1115 11:05:54.407848  632485 notify.go:221] Checking for updates...
	I1115 11:05:54.408830  632485 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:05:54.408949  632485 status.go:174] checking status of ha-439113 ...
	I1115 11:05:54.409555  632485 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:05:54.430649  632485 status.go:371] ha-439113 host status = "Running" (err=<nil>)
	I1115 11:05:54.430674  632485 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:05:54.430976  632485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:05:54.457825  632485 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:05:54.458129  632485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:05:54.458184  632485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:05:54.479538  632485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:05:54.591393  632485 ssh_runner.go:195] Run: systemctl --version
	I1115 11:05:54.599297  632485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:05:54.614578  632485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:05:54.684376  632485 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-15 11:05:54.673456928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:05:54.685088  632485 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:05:54.685126  632485 api_server.go:166] Checking apiserver status ...
	I1115 11:05:54.685185  632485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:05:54.697999  632485 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	I1115 11:05:54.706883  632485 api_server.go:182] apiserver freezer: "13:freezer:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63"
	I1115 11:05:54.706961  632485 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63/freezer.state
	I1115 11:05:54.719275  632485 api_server.go:204] freezer state: "THAWED"
	I1115 11:05:54.719306  632485 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:05:54.729336  632485 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:05:54.729367  632485 status.go:463] ha-439113 apiserver status = Running (err=<nil>)
	I1115 11:05:54.729378  632485 status.go:176] ha-439113 status: &{Name:ha-439113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:05:54.729410  632485 status.go:174] checking status of ha-439113-m02 ...
	I1115 11:05:54.729712  632485 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:05:54.749229  632485 status.go:371] ha-439113-m02 host status = "Running" (err=<nil>)
	I1115 11:05:54.749255  632485 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:05:54.749575  632485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:05:54.768841  632485 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:05:54.769261  632485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:05:54.769316  632485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:05:54.796387  632485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:05:54.907101  632485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:05:54.920604  632485 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:05:54.920631  632485 api_server.go:166] Checking apiserver status ...
	I1115 11:05:54.920672  632485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1115 11:05:54.932524  632485 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:05:54.932551  632485 status.go:463] ha-439113-m02 apiserver status = Running (err=<nil>)
	I1115 11:05:54.932560  632485 status.go:176] ha-439113-m02 status: &{Name:ha-439113-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:05:54.932577  632485 status.go:174] checking status of ha-439113-m03 ...
	I1115 11:05:54.932934  632485 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 11:05:54.953360  632485 status.go:371] ha-439113-m03 host status = "Running" (err=<nil>)
	I1115 11:05:54.953389  632485 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:05:54.953697  632485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 11:05:54.970497  632485 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:05:54.970818  632485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:05:54.970869  632485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 11:05:54.997391  632485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 11:05:55.118598  632485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:05:55.133281  632485 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:05:55.133313  632485 api_server.go:166] Checking apiserver status ...
	I1115 11:05:55.133360  632485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:05:55.145807  632485 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	I1115 11:05:55.155395  632485 api_server.go:182] apiserver freezer: "13:freezer:/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585"
	I1115 11:05:55.155489  632485 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585/freezer.state
	I1115 11:05:55.164157  632485 api_server.go:204] freezer state: "THAWED"
	I1115 11:05:55.164185  632485 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:05:55.172724  632485 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:05:55.172796  632485 status.go:463] ha-439113-m03 apiserver status = Running (err=<nil>)
	I1115 11:05:55.172825  632485 status.go:176] ha-439113-m03 status: &{Name:ha-439113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:05:55.172988  632485 status.go:174] checking status of ha-439113-m04 ...
	I1115 11:05:55.173350  632485 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:05:55.191247  632485 status.go:371] ha-439113-m04 host status = "Running" (err=<nil>)
	I1115 11:05:55.191278  632485 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:05:55.191587  632485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:05:55.209629  632485 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:05:55.209975  632485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:05:55.210021  632485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:05:55.227442  632485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:05:55.335036  632485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:05:55.352136  632485 status.go:176] ha-439113-m04 status: &{Name:ha-439113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1115 11:05:55.360717  586561 retry.go:31] will retry after 1.320353192s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5: exit status 2 (992.179221ms)

-- stdout --
	ha-439113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-439113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1115 11:05:56.739544  632669 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:05:56.739779  632669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:05:56.739818  632669 out.go:374] Setting ErrFile to fd 2...
	I1115 11:05:56.739838  632669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:05:56.740153  632669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:05:56.740385  632669 out.go:368] Setting JSON to false
	I1115 11:05:56.740444  632669 mustload.go:66] Loading cluster: ha-439113
	I1115 11:05:56.740485  632669 notify.go:221] Checking for updates...
	I1115 11:05:56.741009  632669 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:05:56.741055  632669 status.go:174] checking status of ha-439113 ...
	I1115 11:05:56.741636  632669 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:05:56.764063  632669 status.go:371] ha-439113 host status = "Running" (err=<nil>)
	I1115 11:05:56.764087  632669 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:05:56.764398  632669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:05:56.801584  632669 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:05:56.801885  632669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:05:56.801938  632669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:05:56.826821  632669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:05:56.930801  632669 ssh_runner.go:195] Run: systemctl --version
	I1115 11:05:56.937797  632669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:05:56.951321  632669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:05:57.020284  632669 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-15 11:05:57.00688233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:05:57.021791  632669 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:05:57.021832  632669 api_server.go:166] Checking apiserver status ...
	I1115 11:05:57.021884  632669 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:05:57.037507  632669 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	I1115 11:05:57.047004  632669 api_server.go:182] apiserver freezer: "13:freezer:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63"
	I1115 11:05:57.047150  632669 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63/freezer.state
	I1115 11:05:57.055482  632669 api_server.go:204] freezer state: "THAWED"
	I1115 11:05:57.055511  632669 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:05:57.064110  632669 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:05:57.064143  632669 status.go:463] ha-439113 apiserver status = Running (err=<nil>)
	I1115 11:05:57.064156  632669 status.go:176] ha-439113 status: &{Name:ha-439113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:05:57.064174  632669 status.go:174] checking status of ha-439113-m02 ...
	I1115 11:05:57.064486  632669 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:05:57.082163  632669 status.go:371] ha-439113-m02 host status = "Running" (err=<nil>)
	I1115 11:05:57.082187  632669 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:05:57.082475  632669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:05:57.102306  632669 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:05:57.102629  632669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:05:57.102673  632669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:05:57.120258  632669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:05:57.226499  632669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:05:57.239997  632669 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:05:57.240023  632669 api_server.go:166] Checking apiserver status ...
	I1115 11:05:57.240068  632669 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1115 11:05:57.250301  632669 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:05:57.250328  632669 status.go:463] ha-439113-m02 apiserver status = Running (err=<nil>)
	I1115 11:05:57.250341  632669 status.go:176] ha-439113-m02 status: &{Name:ha-439113-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:05:57.250358  632669 status.go:174] checking status of ha-439113-m03 ...
	I1115 11:05:57.250649  632669 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 11:05:57.268551  632669 status.go:371] ha-439113-m03 host status = "Running" (err=<nil>)
	I1115 11:05:57.268582  632669 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:05:57.269080  632669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 11:05:57.287701  632669 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:05:57.288168  632669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:05:57.288230  632669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 11:05:57.312121  632669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 11:05:57.418648  632669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:05:57.434378  632669 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:05:57.434406  632669 api_server.go:166] Checking apiserver status ...
	I1115 11:05:57.434463  632669 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:05:57.448430  632669 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	I1115 11:05:57.459103  632669 api_server.go:182] apiserver freezer: "13:freezer:/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585"
	I1115 11:05:57.459181  632669 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585/freezer.state
	I1115 11:05:57.469641  632669 api_server.go:204] freezer state: "THAWED"
	I1115 11:05:57.469674  632669 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:05:57.478953  632669 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:05:57.478985  632669 status.go:463] ha-439113-m03 apiserver status = Running (err=<nil>)
	I1115 11:05:57.478995  632669 status.go:176] ha-439113-m03 status: &{Name:ha-439113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:05:57.479013  632669 status.go:174] checking status of ha-439113-m04 ...
	I1115 11:05:57.479319  632669 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:05:57.497712  632669 status.go:371] ha-439113-m04 host status = "Running" (err=<nil>)
	I1115 11:05:57.497738  632669 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:05:57.498029  632669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:05:57.514843  632669 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:05:57.515284  632669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:05:57.515334  632669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:05:57.541482  632669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:05:57.654484  632669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:05:57.667785  632669 status.go:176] ha-439113-m04 status: &{Name:ha-439113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1115 11:05:57.673930  586561 retry.go:31] will retry after 3.005281331s: exit status 2
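
The per-node apiserver check in the stderr above is a fixed sequence: pgrep for the kube-apiserver pid, resolve its freezer cgroup from /proc/<pid>/cgroup, confirm freezer.state is THAWED, then GET /healthz on the control-plane endpoint. A rough local Go sketch of the same sequence, assuming cgroup v1 as in the log and skipping TLS verification for brevity (minikube runs these commands over SSH inside the node and verifies against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os/exec"
		"strings"
	)

	// probeAPIServer mirrors the api_server.go steps logged above:
	// pid lookup, freezer cgroup state, then an HTTPS /healthz probe.
	func probeAPIServer(healthzURL string) error {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			return fmt.Errorf("apiserver pid not found: %w", err)
		}
		pid := strings.TrimSpace(string(out))

		// cgroup v1 layout, as in the log: 13:freezer:/docker/.../crio/crio-...
		state, err := exec.Command("sudo", "sh", "-c",
			"cat /sys/fs/cgroup/freezer$(awk -F: '/freezer/{print $3}' /proc/"+pid+"/cgroup)/freezer.state").Output()
		if err == nil && !strings.Contains(string(state), "THAWED") {
			return fmt.Errorf("apiserver cgroup not thawed: %s", strings.TrimSpace(string(state)))
		}

		// The apiserver certificate is not trusted locally, so skip verification here.
		client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
		resp, err := client.Get(healthzURL)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		if err := probeAPIServer("https://192.168.49.254:8443/healthz"); err != nil {
			fmt.Println("apiserver check failed:", err)
			return
		}
		fmt.Println("ok")
	}
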
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5: exit status 2 (1.032722988s)

                                                
                                                
-- stdout --
	ha-439113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-439113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:06:00.732373  632859 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:06:00.732553  632859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:06:00.732564  632859 out.go:374] Setting ErrFile to fd 2...
	I1115 11:06:00.732569  632859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:06:00.732837  632859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:06:00.733126  632859 out.go:368] Setting JSON to false
	I1115 11:06:00.733163  632859 mustload.go:66] Loading cluster: ha-439113
	I1115 11:06:00.733237  632859 notify.go:221] Checking for updates...
	I1115 11:06:00.733649  632859 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:06:00.733669  632859 status.go:174] checking status of ha-439113 ...
	I1115 11:06:00.734231  632859 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:06:00.758409  632859 status.go:371] ha-439113 host status = "Running" (err=<nil>)
	I1115 11:06:00.758438  632859 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:06:00.758757  632859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:06:00.788540  632859 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:06:00.788839  632859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:00.788979  632859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:06:00.818050  632859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:06:00.926879  632859 ssh_runner.go:195] Run: systemctl --version
	I1115 11:06:00.933401  632859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:00.947838  632859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:06:01.021619  632859 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-15 11:06:01.010504777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:06:01.022178  632859 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:01.022212  632859 api_server.go:166] Checking apiserver status ...
	I1115 11:06:01.022259  632859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:06:01.034395  632859 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	I1115 11:06:01.043604  632859 api_server.go:182] apiserver freezer: "13:freezer:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63"
	I1115 11:06:01.043677  632859 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63/freezer.state
	I1115 11:06:01.051499  632859 api_server.go:204] freezer state: "THAWED"
	I1115 11:06:01.051527  632859 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:06:01.059970  632859 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:06:01.059997  632859 status.go:463] ha-439113 apiserver status = Running (err=<nil>)
	I1115 11:06:01.060009  632859 status.go:176] ha-439113 status: &{Name:ha-439113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:01.060027  632859 status.go:174] checking status of ha-439113-m02 ...
	I1115 11:06:01.060360  632859 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:06:01.077287  632859 status.go:371] ha-439113-m02 host status = "Running" (err=<nil>)
	I1115 11:06:01.077313  632859 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:06:01.077619  632859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:06:01.103170  632859 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:06:01.103473  632859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:01.103550  632859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:06:01.122268  632859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:06:01.226641  632859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:01.241443  632859 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:01.241474  632859 api_server.go:166] Checking apiserver status ...
	I1115 11:06:01.241518  632859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1115 11:06:01.253956  632859 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:06:01.254000  632859 status.go:463] ha-439113-m02 apiserver status = Running (err=<nil>)
	I1115 11:06:01.254011  632859 status.go:176] ha-439113-m02 status: &{Name:ha-439113-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:01.254027  632859 status.go:174] checking status of ha-439113-m03 ...
	I1115 11:06:01.254346  632859 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 11:06:01.286546  632859 status.go:371] ha-439113-m03 host status = "Running" (err=<nil>)
	I1115 11:06:01.286585  632859 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:06:01.286957  632859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 11:06:01.315956  632859 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:06:01.316388  632859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:01.316452  632859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 11:06:01.348032  632859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 11:06:01.462571  632859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:01.477400  632859 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:01.477431  632859 api_server.go:166] Checking apiserver status ...
	I1115 11:06:01.477475  632859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:06:01.491566  632859 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	I1115 11:06:01.502027  632859 api_server.go:182] apiserver freezer: "13:freezer:/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585"
	I1115 11:06:01.502124  632859 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585/freezer.state
	I1115 11:06:01.510191  632859 api_server.go:204] freezer state: "THAWED"
	I1115 11:06:01.510226  632859 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:06:01.519795  632859 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:06:01.519824  632859 status.go:463] ha-439113-m03 apiserver status = Running (err=<nil>)
	I1115 11:06:01.519834  632859 status.go:176] ha-439113-m03 status: &{Name:ha-439113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:01.519852  632859 status.go:174] checking status of ha-439113-m04 ...
	I1115 11:06:01.520163  632859 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:06:01.544657  632859 status.go:371] ha-439113-m04 host status = "Running" (err=<nil>)
	I1115 11:06:01.544686  632859 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:06:01.545086  632859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:06:01.567979  632859 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:06:01.568300  632859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:01.568360  632859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:06:01.585934  632859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:06:01.691743  632859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:01.707057  632859 status.go:176] ha-439113-m04 status: &{Name:ha-439113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1115 11:06:01.713259  586561 retry.go:31] will retry after 5.051962124s: exit status 2
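
Each node is reached over SSH on 127.0.0.1 using whatever host port Docker published for the container's 22/tcp; the cli_runner lines above extract it with a Go template. A small sketch of the same lookup plus a plain TCP reachability check (hypothetical helper; the ports 33524/33534/33539/33544 above come from that mapping):

	package main

	import (
		"fmt"
		"net"
		"os/exec"
		"strings"
		"time"
	)

	// hostSSHPort asks Docker which host port is published for the node
	// container's 22/tcp, using the same template as the log above.
	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("ha-439113-m02")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// A bare TCP dial is enough to confirm sshd is reachable on that port.
		conn, err := net.DialTimeout("tcp", net.JoinHostPort("127.0.0.1", port), 3*time.Second)
		if err != nil {
			fmt.Println("ssh port not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh reachable on 127.0.0.1:" + port)
	}
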
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5: exit status 2 (996.458473ms)

                                                
                                                
-- stdout --
	ha-439113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-439113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:06:06.822766  633050 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:06:06.822942  633050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:06:06.822973  633050 out.go:374] Setting ErrFile to fd 2...
	I1115 11:06:06.822994  633050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:06:06.823250  633050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:06:06.823458  633050 out.go:368] Setting JSON to false
	I1115 11:06:06.823529  633050 mustload.go:66] Loading cluster: ha-439113
	I1115 11:06:06.823594  633050 notify.go:221] Checking for updates...
	I1115 11:06:06.825137  633050 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:06:06.825182  633050 status.go:174] checking status of ha-439113 ...
	I1115 11:06:06.826695  633050 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:06:06.865138  633050 status.go:371] ha-439113 host status = "Running" (err=<nil>)
	I1115 11:06:06.865160  633050 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:06:06.865458  633050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:06:06.882416  633050 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:06:06.882709  633050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:06.882757  633050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:06:06.902152  633050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:06:07.010603  633050 ssh_runner.go:195] Run: systemctl --version
	I1115 11:06:07.019491  633050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:07.035275  633050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:06:07.105794  633050 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-15 11:06:07.096384241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:06:07.106335  633050 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:07.106367  633050 api_server.go:166] Checking apiserver status ...
	I1115 11:06:07.106413  633050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:06:07.120363  633050 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	I1115 11:06:07.129548  633050 api_server.go:182] apiserver freezer: "13:freezer:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63"
	I1115 11:06:07.129621  633050 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63/freezer.state
	I1115 11:06:07.141698  633050 api_server.go:204] freezer state: "THAWED"
	I1115 11:06:07.141726  633050 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:06:07.151245  633050 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:06:07.151273  633050 status.go:463] ha-439113 apiserver status = Running (err=<nil>)
	I1115 11:06:07.151285  633050 status.go:176] ha-439113 status: &{Name:ha-439113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:07.151337  633050 status.go:174] checking status of ha-439113-m02 ...
	I1115 11:06:07.151663  633050 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:06:07.175513  633050 status.go:371] ha-439113-m02 host status = "Running" (err=<nil>)
	I1115 11:06:07.175555  633050 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:06:07.175860  633050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:06:07.198505  633050 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:06:07.198881  633050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:07.198936  633050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:06:07.218970  633050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:06:07.326547  633050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:07.340027  633050 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:07.340053  633050 api_server.go:166] Checking apiserver status ...
	I1115 11:06:07.340091  633050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1115 11:06:07.358991  633050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:06:07.359031  633050 status.go:463] ha-439113-m02 apiserver status = Running (err=<nil>)
	I1115 11:06:07.359040  633050 status.go:176] ha-439113-m02 status: &{Name:ha-439113-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:07.359065  633050 status.go:174] checking status of ha-439113-m03 ...
	I1115 11:06:07.359399  633050 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 11:06:07.376536  633050 status.go:371] ha-439113-m03 host status = "Running" (err=<nil>)
	I1115 11:06:07.376566  633050 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:06:07.376938  633050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 11:06:07.397589  633050 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:06:07.397947  633050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:07.397992  633050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 11:06:07.415915  633050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 11:06:07.527391  633050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:07.541468  633050 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:07.541501  633050 api_server.go:166] Checking apiserver status ...
	I1115 11:06:07.541542  633050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:06:07.557720  633050 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	I1115 11:06:07.566981  633050 api_server.go:182] apiserver freezer: "13:freezer:/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585"
	I1115 11:06:07.567049  633050 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585/freezer.state
	I1115 11:06:07.576415  633050 api_server.go:204] freezer state: "THAWED"
	I1115 11:06:07.576449  633050 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:06:07.585317  633050 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:06:07.585345  633050 status.go:463] ha-439113-m03 apiserver status = Running (err=<nil>)
	I1115 11:06:07.585354  633050 status.go:176] ha-439113-m03 status: &{Name:ha-439113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:07.585398  633050 status.go:174] checking status of ha-439113-m04 ...
	I1115 11:06:07.585753  633050 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:06:07.603681  633050 status.go:371] ha-439113-m04 host status = "Running" (err=<nil>)
	I1115 11:06:07.603709  633050 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:06:07.604146  633050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:06:07.621575  633050 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:06:07.621877  633050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:07.621925  633050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:06:07.641471  633050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:06:07.746469  633050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:07.759779  633050 status.go:176] ha-439113-m04 status: &{Name:ha-439113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1115 11:06:07.765707  586561 retry.go:31] will retry after 5.617783542s: exit status 2
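
For every node the status command also runs "sudo systemctl is-active --quiet service kubelet"; with --quiet there is no output, so only the exit code matters (0 means active). A minimal sketch of how that exit code could map onto the Kubelet:Running/Stopped field shown in the status structs above, assuming a hypothetical kubeletState helper:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletState runs the same check as the ssh_runner lines above and
	// translates the exit code into a status string (hypothetical helper).
	func kubeletState() string {
		if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run(); err != nil {
			return "Stopped"
		}
		return "Running"
	}

	func main() {
		fmt.Println("kubelet:", kubeletState())
	}
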
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5: exit status 2 (1.004445642s)

                                                
                                                
-- stdout --
	ha-439113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-439113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:06:13.431529  633234 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:06:13.431763  633234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:06:13.431791  633234 out.go:374] Setting ErrFile to fd 2...
	I1115 11:06:13.431810  633234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:06:13.432095  633234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:06:13.432316  633234 out.go:368] Setting JSON to false
	I1115 11:06:13.432375  633234 mustload.go:66] Loading cluster: ha-439113
	I1115 11:06:13.432412  633234 notify.go:221] Checking for updates...
	I1115 11:06:13.432909  633234 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:06:13.432961  633234 status.go:174] checking status of ha-439113 ...
	I1115 11:06:13.433519  633234 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:06:13.477030  633234 status.go:371] ha-439113 host status = "Running" (err=<nil>)
	I1115 11:06:13.477097  633234 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:06:13.477454  633234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:06:13.497109  633234 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:06:13.497643  633234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:13.497688  633234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:06:13.522923  633234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:06:13.626514  633234 ssh_runner.go:195] Run: systemctl --version
	I1115 11:06:13.633155  633234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:13.648702  633234 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:06:13.723277  633234 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-15 11:06:13.711764347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:06:13.723806  633234 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:13.723834  633234 api_server.go:166] Checking apiserver status ...
	I1115 11:06:13.723882  633234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:06:13.738011  633234 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	I1115 11:06:13.746755  633234 api_server.go:182] apiserver freezer: "13:freezer:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63"
	I1115 11:06:13.746830  633234 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63/freezer.state
	I1115 11:06:13.755052  633234 api_server.go:204] freezer state: "THAWED"
	I1115 11:06:13.755080  633234 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:06:13.763446  633234 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:06:13.763480  633234 status.go:463] ha-439113 apiserver status = Running (err=<nil>)
	I1115 11:06:13.763500  633234 status.go:176] ha-439113 status: &{Name:ha-439113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:13.763521  633234 status.go:174] checking status of ha-439113-m02 ...
	I1115 11:06:13.763841  633234 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:06:13.792533  633234 status.go:371] ha-439113-m02 host status = "Running" (err=<nil>)
	I1115 11:06:13.793101  633234 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:06:13.793434  633234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:06:13.814658  633234 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:06:13.814972  633234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:13.815020  633234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:06:13.834033  633234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:06:13.946522  633234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:13.961457  633234 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:13.961482  633234 api_server.go:166] Checking apiserver status ...
	I1115 11:06:13.961527  633234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1115 11:06:13.971938  633234 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:06:13.971958  633234 status.go:463] ha-439113-m02 apiserver status = Running (err=<nil>)
	I1115 11:06:13.971968  633234 status.go:176] ha-439113-m02 status: &{Name:ha-439113-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:13.971984  633234 status.go:174] checking status of ha-439113-m03 ...
	I1115 11:06:13.972290  633234 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 11:06:13.990482  633234 status.go:371] ha-439113-m03 host status = "Running" (err=<nil>)
	I1115 11:06:13.990511  633234 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:06:13.990904  633234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 11:06:14.012780  633234 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:06:14.013218  633234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:14.013272  633234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 11:06:14.038121  633234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 11:06:14.146644  633234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:14.161334  633234 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:14.161365  633234 api_server.go:166] Checking apiserver status ...
	I1115 11:06:14.161412  633234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:06:14.175725  633234 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	I1115 11:06:14.186134  633234 api_server.go:182] apiserver freezer: "13:freezer:/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585"
	I1115 11:06:14.186234  633234 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585/freezer.state
	I1115 11:06:14.200440  633234 api_server.go:204] freezer state: "THAWED"
	I1115 11:06:14.200468  633234 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:06:14.209149  633234 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:06:14.209184  633234 status.go:463] ha-439113-m03 apiserver status = Running (err=<nil>)
	I1115 11:06:14.209207  633234 status.go:176] ha-439113-m03 status: &{Name:ha-439113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:14.209226  633234 status.go:174] checking status of ha-439113-m04 ...
	I1115 11:06:14.209554  633234 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:06:14.226276  633234 status.go:371] ha-439113-m04 host status = "Running" (err=<nil>)
	I1115 11:06:14.226302  633234 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:06:14.226639  633234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:06:14.245632  633234 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:06:14.245926  633234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:14.245964  633234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:06:14.263588  633234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:06:14.366279  633234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:14.382387  633234 status.go:176] ha-439113-m04 status: &{Name:ha-439113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1115 11:06:14.389075  586561 retry.go:31] will retry after 6.742264226s: exit status 2
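
The other ssh_runner command repeated for each node, sh -c "df -h /var | awk 'NR==2{print $5}'", reports how full /var is inside the node. A short local sketch of the same pipeline, assuming a hypothetical varUsage helper (the test itself runs the command over SSH in each node):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// varUsage returns the use% column for /var, exactly as the df|awk
	// pipeline in the log does (hypothetical local helper).
	func varUsage() (string, error) {
		out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		use, err := varUsage()
		if err != nil {
			fmt.Println("df failed:", err)
			return
		}
		fmt.Println("/var usage:", use)
	}
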
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5: exit status 2 (1.020349217s)

                                                
                                                
-- stdout --
	ha-439113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-439113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:06:21.175970  633417 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:06:21.176154  633417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:06:21.176181  633417 out.go:374] Setting ErrFile to fd 2...
	I1115 11:06:21.176199  633417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:06:21.176473  633417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:06:21.176687  633417 out.go:368] Setting JSON to false
	I1115 11:06:21.176746  633417 mustload.go:66] Loading cluster: ha-439113
	I1115 11:06:21.176783  633417 notify.go:221] Checking for updates...
	I1115 11:06:21.177246  633417 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:06:21.177287  633417 status.go:174] checking status of ha-439113 ...
	I1115 11:06:21.177855  633417 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:06:21.202134  633417 status.go:371] ha-439113 host status = "Running" (err=<nil>)
	I1115 11:06:21.202157  633417 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:06:21.202467  633417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:06:21.230097  633417 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:06:21.230395  633417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:21.230434  633417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:06:21.259185  633417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:06:21.382985  633417 ssh_runner.go:195] Run: systemctl --version
	I1115 11:06:21.390252  633417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:21.405719  633417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:06:21.487158  633417 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-15 11:06:21.477680877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:06:21.487695  633417 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:21.487725  633417 api_server.go:166] Checking apiserver status ...
	I1115 11:06:21.487765  633417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:06:21.500352  633417 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	I1115 11:06:21.509557  633417 api_server.go:182] apiserver freezer: "13:freezer:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63"
	I1115 11:06:21.509643  633417 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63/freezer.state
	I1115 11:06:21.517584  633417 api_server.go:204] freezer state: "THAWED"
	I1115 11:06:21.517617  633417 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:06:21.528281  633417 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:06:21.528356  633417 status.go:463] ha-439113 apiserver status = Running (err=<nil>)
	I1115 11:06:21.528388  633417 status.go:176] ha-439113 status: &{Name:ha-439113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:21.528431  633417 status.go:174] checking status of ha-439113-m02 ...
	I1115 11:06:21.528763  633417 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:06:21.548676  633417 status.go:371] ha-439113-m02 host status = "Running" (err=<nil>)
	I1115 11:06:21.548703  633417 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:06:21.549147  633417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:06:21.570580  633417 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:06:21.570877  633417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:21.570926  633417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:06:21.593052  633417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:06:21.698952  633417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:21.713255  633417 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:21.713286  633417 api_server.go:166] Checking apiserver status ...
	I1115 11:06:21.713340  633417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1115 11:06:21.724357  633417 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:06:21.724382  633417 status.go:463] ha-439113-m02 apiserver status = Running (err=<nil>)
	I1115 11:06:21.724393  633417 status.go:176] ha-439113-m02 status: &{Name:ha-439113-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:21.724409  633417 status.go:174] checking status of ha-439113-m03 ...
	I1115 11:06:21.724726  633417 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 11:06:21.744683  633417 status.go:371] ha-439113-m03 host status = "Running" (err=<nil>)
	I1115 11:06:21.744720  633417 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:06:21.745081  633417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 11:06:21.762286  633417 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:06:21.762646  633417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:21.762693  633417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 11:06:21.793823  633417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 11:06:21.902888  633417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:21.919782  633417 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:21.919832  633417 api_server.go:166] Checking apiserver status ...
	I1115 11:06:21.919871  633417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:06:21.932374  633417 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	I1115 11:06:21.940795  633417 api_server.go:182] apiserver freezer: "13:freezer:/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585"
	I1115 11:06:21.940908  633417 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585/freezer.state
	I1115 11:06:21.948374  633417 api_server.go:204] freezer state: "THAWED"
	I1115 11:06:21.948414  633417 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:06:21.957082  633417 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:06:21.957113  633417 status.go:463] ha-439113-m03 apiserver status = Running (err=<nil>)
	I1115 11:06:21.957123  633417 status.go:176] ha-439113-m03 status: &{Name:ha-439113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:21.957140  633417 status.go:174] checking status of ha-439113-m04 ...
	I1115 11:06:21.957444  633417 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:06:21.974552  633417 status.go:371] ha-439113-m04 host status = "Running" (err=<nil>)
	I1115 11:06:21.974582  633417 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:06:21.974882  633417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:06:21.999295  633417 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:06:21.999609  633417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:21.999666  633417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:06:22.023255  633417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:06:22.130597  633417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:22.143627  633417 status.go:176] ha-439113-m04 status: &{Name:ha-439113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1115 11:06:22.152198  586561 retry.go:31] will retry after 14.525700781s: exit status 2
E1115 11:06:22.372214  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
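The stderr above shows why the cluster reads as degraded: on ha-439113-m02 the pid probe "sudo pgrep -xnf kube-apiserver.*minikube.*" exits with status 1, so the harness never reaches the freezer-cgroup or /healthz checks for that node and the final status records apiserver: Stopped. A minimal sketch of reproducing the same probe by hand, assuming the profile and node names from this run (the <PID> placeholder is whatever pgrep returns on a healthy node; kubectl with the ha-439113 context is an assumption, not part of the harness):

	# Probe for a kube-apiserver process on the secondary control plane (exits 1 in this run).
	out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m02 "sudo pgrep -xnf kube-apiserver.*minikube.*"
	# On a healthy node the harness then inspects that PID's freezer cgroup ...
	out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m02 "sudo egrep '^[0-9]+:freezer:' /proc/<PID>/cgroup"
	# ... and finally checks the API server health endpoint behind the HA virtual IP.
	kubectl --context ha-439113 get --raw /healthz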
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5: exit status 2 (1.008981866s)

                                                
                                                
-- stdout --
	ha-439113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-439113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:06:36.726566  633605 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:06:36.726691  633605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:06:36.726702  633605 out.go:374] Setting ErrFile to fd 2...
	I1115 11:06:36.726707  633605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:06:36.727037  633605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:06:36.727234  633605 out.go:368] Setting JSON to false
	I1115 11:06:36.727263  633605 mustload.go:66] Loading cluster: ha-439113
	I1115 11:06:36.727659  633605 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:06:36.727678  633605 status.go:174] checking status of ha-439113 ...
	I1115 11:06:36.728515  633605 notify.go:221] Checking for updates...
	I1115 11:06:36.729402  633605 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:06:36.750532  633605 status.go:371] ha-439113 host status = "Running" (err=<nil>)
	I1115 11:06:36.750554  633605 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:06:36.750905  633605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:06:36.795367  633605 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:06:36.795687  633605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:36.795727  633605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:06:36.837797  633605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:06:36.942311  633605 ssh_runner.go:195] Run: systemctl --version
	I1115 11:06:36.948570  633605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:36.962482  633605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:06:37.033085  633605 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-15 11:06:37.021360813 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:06:37.033832  633605 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:37.033875  633605 api_server.go:166] Checking apiserver status ...
	I1115 11:06:37.033934  633605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:06:37.048228  633605 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	I1115 11:06:37.057922  633605 api_server.go:182] apiserver freezer: "13:freezer:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63"
	I1115 11:06:37.058005  633605 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63/freezer.state
	I1115 11:06:37.066731  633605 api_server.go:204] freezer state: "THAWED"
	I1115 11:06:37.066757  633605 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:06:37.076028  633605 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:06:37.076060  633605 status.go:463] ha-439113 apiserver status = Running (err=<nil>)
	I1115 11:06:37.076072  633605 status.go:176] ha-439113 status: &{Name:ha-439113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:37.076103  633605 status.go:174] checking status of ha-439113-m02 ...
	I1115 11:06:37.076412  633605 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:06:37.096977  633605 status.go:371] ha-439113-m02 host status = "Running" (err=<nil>)
	I1115 11:06:37.097003  633605 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:06:37.097368  633605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:06:37.116651  633605 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:06:37.117016  633605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:37.117064  633605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:06:37.137060  633605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:06:37.246441  633605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:37.260069  633605 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:37.260099  633605 api_server.go:166] Checking apiserver status ...
	I1115 11:06:37.260140  633605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1115 11:06:37.270163  633605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:06:37.270187  633605 status.go:463] ha-439113-m02 apiserver status = Running (err=<nil>)
	I1115 11:06:37.270197  633605 status.go:176] ha-439113-m02 status: &{Name:ha-439113-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:37.270213  633605 status.go:174] checking status of ha-439113-m03 ...
	I1115 11:06:37.270528  633605 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 11:06:37.287074  633605 status.go:371] ha-439113-m03 host status = "Running" (err=<nil>)
	I1115 11:06:37.287106  633605 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:06:37.287398  633605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 11:06:37.313748  633605 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:06:37.314070  633605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:37.314124  633605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 11:06:37.339716  633605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 11:06:37.446664  633605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:37.461008  633605 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:37.461043  633605 api_server.go:166] Checking apiserver status ...
	I1115 11:06:37.461091  633605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:06:37.472594  633605 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	I1115 11:06:37.481281  633605 api_server.go:182] apiserver freezer: "13:freezer:/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585"
	I1115 11:06:37.481365  633605 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585/freezer.state
	I1115 11:06:37.489170  633605 api_server.go:204] freezer state: "THAWED"
	I1115 11:06:37.489201  633605 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:06:37.497430  633605 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:06:37.497458  633605 status.go:463] ha-439113-m03 apiserver status = Running (err=<nil>)
	I1115 11:06:37.497467  633605 status.go:176] ha-439113-m03 status: &{Name:ha-439113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:37.497493  633605 status.go:174] checking status of ha-439113-m04 ...
	I1115 11:06:37.497813  633605 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:06:37.515231  633605 status.go:371] ha-439113-m04 host status = "Running" (err=<nil>)
	I1115 11:06:37.515259  633605 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:06:37.515557  633605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:06:37.539961  633605 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:06:37.540269  633605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:37.540307  633605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:06:37.562120  633605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:06:37.667191  633605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:37.680294  633605 status.go:176] ha-439113-m04 status: &{Name:ha-439113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1115 11:06:37.688640  586561 retry.go:31] will retry after 10.813730239s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5: exit status 2 (1.001784975s)

                                                
                                                
-- stdout --
	ha-439113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-439113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:06:48.565363  633795 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:06:48.565578  633795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:06:48.565606  633795 out.go:374] Setting ErrFile to fd 2...
	I1115 11:06:48.565624  633795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:06:48.565928  633795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:06:48.566161  633795 out.go:368] Setting JSON to false
	I1115 11:06:48.566220  633795 mustload.go:66] Loading cluster: ha-439113
	I1115 11:06:48.566309  633795 notify.go:221] Checking for updates...
	I1115 11:06:48.566699  633795 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:06:48.566858  633795 status.go:174] checking status of ha-439113 ...
	I1115 11:06:48.567881  633795 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:06:48.588183  633795 status.go:371] ha-439113 host status = "Running" (err=<nil>)
	I1115 11:06:48.588211  633795 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:06:48.588527  633795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:06:48.614708  633795 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:06:48.615000  633795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:48.615057  633795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:06:48.637537  633795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:06:48.742636  633795 ssh_runner.go:195] Run: systemctl --version
	I1115 11:06:48.749211  633795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:48.763004  633795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:06:48.870496  633795 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-15 11:06:48.861213459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:06:48.871122  633795 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:48.871154  633795 api_server.go:166] Checking apiserver status ...
	I1115 11:06:48.871198  633795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:06:48.884353  633795 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	I1115 11:06:48.893391  633795 api_server.go:182] apiserver freezer: "13:freezer:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63"
	I1115 11:06:48.893471  633795 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63/freezer.state
	I1115 11:06:48.901508  633795 api_server.go:204] freezer state: "THAWED"
	I1115 11:06:48.901536  633795 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:06:48.910181  633795 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:06:48.910211  633795 status.go:463] ha-439113 apiserver status = Running (err=<nil>)
	I1115 11:06:48.910222  633795 status.go:176] ha-439113 status: &{Name:ha-439113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:48.910262  633795 status.go:174] checking status of ha-439113-m02 ...
	I1115 11:06:48.910588  633795 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:06:48.929609  633795 status.go:371] ha-439113-m02 host status = "Running" (err=<nil>)
	I1115 11:06:48.929640  633795 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:06:48.929944  633795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:06:48.947902  633795 host.go:66] Checking if "ha-439113-m02" exists ...
	I1115 11:06:48.948216  633795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:48.948276  633795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:06:48.969721  633795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:06:49.074518  633795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:49.087580  633795 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:49.087610  633795 api_server.go:166] Checking apiserver status ...
	I1115 11:06:49.087650  633795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1115 11:06:49.098503  633795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:06:49.098580  633795 status.go:463] ha-439113-m02 apiserver status = Running (err=<nil>)
	I1115 11:06:49.098619  633795 status.go:176] ha-439113-m02 status: &{Name:ha-439113-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:49.098649  633795 status.go:174] checking status of ha-439113-m03 ...
	I1115 11:06:49.098972  633795 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 11:06:49.115931  633795 status.go:371] ha-439113-m03 host status = "Running" (err=<nil>)
	I1115 11:06:49.115970  633795 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:06:49.116286  633795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 11:06:49.137637  633795 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 11:06:49.137982  633795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:49.138030  633795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 11:06:49.159349  633795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 11:06:49.262952  633795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:49.277151  633795 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 11:06:49.277181  633795 api_server.go:166] Checking apiserver status ...
	I1115 11:06:49.277235  633795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:06:49.289221  633795 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	I1115 11:06:49.297609  633795 api_server.go:182] apiserver freezer: "13:freezer:/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585"
	I1115 11:06:49.297725  633795 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585/freezer.state
	I1115 11:06:49.305524  633795 api_server.go:204] freezer state: "THAWED"
	I1115 11:06:49.305562  633795 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 11:06:49.314035  633795 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 11:06:49.314062  633795 status.go:463] ha-439113-m03 apiserver status = Running (err=<nil>)
	I1115 11:06:49.314072  633795 status.go:176] ha-439113-m03 status: &{Name:ha-439113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:06:49.314089  633795 status.go:174] checking status of ha-439113-m04 ...
	I1115 11:06:49.314397  633795 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:06:49.341458  633795 status.go:371] ha-439113-m04 host status = "Running" (err=<nil>)
	I1115 11:06:49.341480  633795 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:06:49.341795  633795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:06:49.361024  633795 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 11:06:49.361324  633795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:06:49.361386  633795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:06:49.381842  633795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:06:49.487197  633795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:06:49.501232  633795 status.go:176] ha-439113-m04 status: &{Name:ha-439113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-439113
helpers_test.go:243: (dbg) docker inspect ha-439113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc",
	        "Created": "2025-11-15T10:52:17.169946413Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 616217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:52:17.244124933Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/hosts",
	        "LogPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc-json.log",
	        "Name": "/ha-439113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-439113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-439113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc",
	                "LowerDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-439113",
	                "Source": "/var/lib/docker/volumes/ha-439113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-439113",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-439113",
	                "name.minikube.sigs.k8s.io": "ha-439113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4b8649b658807d1e28bfc43925c48d4d32daddec11cb9f766be693df9a73c857",
	            "SandboxKey": "/var/run/docker/netns/4b8649b65880",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33528"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-439113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:6e:3e:a3:f6:71",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70b4341e58399e11a79033573f4328a7d843f08aeced339b6115cf0c5d327007",
	                    "EndpointID": "0a4055c126d7ee276ccb0bdcb15555844a98e2e6d37a65e167b535cc8f74d59b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-439113",
	                        "d546a4fc19d8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
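The Ports map in this inspect output is what the earlier cli_runner/sshutil lines resolve before opening an SSH session to the node. As a rough illustration (the format template is the one logged above; the expected value 33524 is specific to this run), the published SSH port can be read back directly:

	# Read the host port published for 22/tcp on the ha-439113 container.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-439113
	# The harness then dials 127.0.0.1:<HostPort> using .minikube/machines/ha-439113/id_rsa.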
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-439113 -n ha-439113
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 logs -n 25: (1.395753209s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m03:/home/docker/cp-test.txt ha-439113:/home/docker/cp-test_ha-439113-m03_ha-439113.txt                       │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113 sudo cat /home/docker/cp-test_ha-439113-m03_ha-439113.txt                                                 │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m03:/home/docker/cp-test.txt ha-439113-m02:/home/docker/cp-test_ha-439113-m03_ha-439113-m02.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m02 sudo cat /home/docker/cp-test_ha-439113-m03_ha-439113-m02.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m03:/home/docker/cp-test.txt ha-439113-m04:/home/docker/cp-test_ha-439113-m03_ha-439113-m04.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test_ha-439113-m03_ha-439113-m04.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp testdata/cp-test.txt ha-439113-m04:/home/docker/cp-test.txt                                                             │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1077460994/001/cp-test_ha-439113-m04.txt │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113:/home/docker/cp-test_ha-439113-m04_ha-439113.txt                       │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113.txt                                                 │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113-m02:/home/docker/cp-test_ha-439113-m04_ha-439113-m02.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m02 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113-m02.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113-m03:/home/docker/cp-test_ha-439113-m04_ha-439113-m03.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113-m03.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ node    │ ha-439113 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:58 UTC │
	│ node    │ ha-439113 node start m02 --alsologtostderr -v 5                                                                                      │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:58 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
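	The last two audit rows above are the operation under test: m02 is stopped and then started again, and the start never records an end time, which is consistent with m02's apiserver still reporting Stopped in the status output earlier in this report. A minimal sketch of the same sequence, using the profile and node names from this run:
	
	# Stop and restart the secondary control plane, then re-check cluster health.
	out/minikube-linux-arm64 -p ha-439113 node stop m02 --alsologtostderr -v 5
	out/minikube-linux-arm64 -p ha-439113 node start m02 --alsologtostderr -v 5
	# The test then expects every control-plane node to report "apiserver: Running".
	out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5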
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:52:11
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:52:11.684114  615834 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:52:11.684311  615834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:52:11.684321  615834 out.go:374] Setting ErrFile to fd 2...
	I1115 10:52:11.684332  615834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:52:11.684635  615834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:52:11.685086  615834 out.go:368] Setting JSON to false
	I1115 10:52:11.686005  615834 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9283,"bootTime":1763194649,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 10:52:11.686077  615834 start.go:143] virtualization:  
	I1115 10:52:11.690439  615834 out.go:179] * [ha-439113] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:52:11.695356  615834 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:52:11.695439  615834 notify.go:221] Checking for updates...
	I1115 10:52:11.702671  615834 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:52:11.706147  615834 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 10:52:11.709608  615834 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 10:52:11.712812  615834 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:52:11.716100  615834 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:52:11.719602  615834 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:52:11.738907  615834 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:52:11.739038  615834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:52:11.803656  615834 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-15 10:52:11.794481335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:52:11.803768  615834 docker.go:319] overlay module found
	I1115 10:52:11.809139  615834 out.go:179] * Using the docker driver based on user configuration
	I1115 10:52:11.812090  615834 start.go:309] selected driver: docker
	I1115 10:52:11.812109  615834 start.go:930] validating driver "docker" against <nil>
	I1115 10:52:11.812123  615834 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:52:11.812965  615834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:52:11.867553  615834 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-15 10:52:11.858068036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:52:11.867723  615834 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:52:11.867964  615834 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:52:11.870990  615834 out.go:179] * Using Docker driver with root privileges
	I1115 10:52:11.873936  615834 cni.go:84] Creating CNI manager for ""
	I1115 10:52:11.874009  615834 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1115 10:52:11.874022  615834 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:52:11.874110  615834 start.go:353] cluster config:
	{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1115 10:52:11.877211  615834 out.go:179] * Starting "ha-439113" primary control-plane node in "ha-439113" cluster
	I1115 10:52:11.880108  615834 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:52:11.883156  615834 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:52:11.885997  615834 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:52:11.886049  615834 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:52:11.886066  615834 cache.go:65] Caching tarball of preloaded images
	I1115 10:52:11.886082  615834 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:52:11.886149  615834 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:52:11.886160  615834 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:52:11.886506  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:52:11.886537  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json: {Name:mk503d89be400de3662f84cf87d45d7e7cbd7d57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:11.906117  615834 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:52:11.906142  615834 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:52:11.906161  615834 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:52:11.906185  615834 start.go:360] acquireMachinesLock for ha-439113: {Name:mk8f5fddf42cbee62c5cd775824daee5e174c730 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:52:11.906292  615834 start.go:364] duration metric: took 86.18µs to acquireMachinesLock for "ha-439113"
	I1115 10:52:11.906323  615834 start.go:93] Provisioning new machine with config: &{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:52:11.906401  615834 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:52:11.909900  615834 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:52:11.910149  615834 start.go:159] libmachine.API.Create for "ha-439113" (driver="docker")
	I1115 10:52:11.910196  615834 client.go:173] LocalClient.Create starting
	I1115 10:52:11.910286  615834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 10:52:11.910325  615834 main.go:143] libmachine: Decoding PEM data...
	I1115 10:52:11.910347  615834 main.go:143] libmachine: Parsing certificate...
	I1115 10:52:11.910403  615834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 10:52:11.910431  615834 main.go:143] libmachine: Decoding PEM data...
	I1115 10:52:11.910445  615834 main.go:143] libmachine: Parsing certificate...
	I1115 10:52:11.910811  615834 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:52:11.926803  615834 cli_runner.go:211] docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:52:11.926899  615834 network_create.go:284] running [docker network inspect ha-439113] to gather additional debugging logs...
	I1115 10:52:11.926919  615834 cli_runner.go:164] Run: docker network inspect ha-439113
	W1115 10:52:11.942752  615834 cli_runner.go:211] docker network inspect ha-439113 returned with exit code 1
	I1115 10:52:11.942781  615834 network_create.go:287] error running [docker network inspect ha-439113]: docker network inspect ha-439113: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-439113 not found
	I1115 10:52:11.942795  615834 network_create.go:289] output of [docker network inspect ha-439113]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-439113 not found
	
	** /stderr **
	I1115 10:52:11.942897  615834 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:52:11.959384  615834 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018caf60}
	I1115 10:52:11.959435  615834 network_create.go:124] attempt to create docker network ha-439113 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1115 10:52:11.959497  615834 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-439113 ha-439113
	I1115 10:52:12.027139  615834 network_create.go:108] docker network ha-439113 192.168.49.0/24 created
	I1115 10:52:12.027175  615834 kic.go:121] calculated static IP "192.168.49.2" for the "ha-439113" container
	I1115 10:52:12.027259  615834 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:52:12.044026  615834 cli_runner.go:164] Run: docker volume create ha-439113 --label name.minikube.sigs.k8s.io=ha-439113 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:52:12.062229  615834 oci.go:103] Successfully created a docker volume ha-439113
	I1115 10:52:12.062343  615834 cli_runner.go:164] Run: docker run --rm --name ha-439113-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-439113 --entrypoint /usr/bin/test -v ha-439113:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:52:12.627885  615834 oci.go:107] Successfully prepared a docker volume ha-439113
	I1115 10:52:12.627981  615834 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:52:12.627997  615834 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:52:12.628073  615834 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-439113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:52:17.096843  615834 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-439113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.468725781s)
	I1115 10:52:17.096922  615834 kic.go:203] duration metric: took 4.468921057s to extract preloaded images to volume ...
	W1115 10:52:17.097066  615834 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:52:17.097180  615834 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:52:17.154778  615834 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-439113 --name ha-439113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-439113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-439113 --network ha-439113 --ip 192.168.49.2 --volume ha-439113:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:52:17.461306  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Running}}
	I1115 10:52:17.480158  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:52:17.506964  615834 cli_runner.go:164] Run: docker exec ha-439113 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:52:17.561103  615834 oci.go:144] the created container "ha-439113" has a running status.
	I1115 10:52:17.561143  615834 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa...
	I1115 10:52:17.707967  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1115 10:52:17.708016  615834 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:52:17.736735  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:52:17.766109  615834 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:52:17.766130  615834 kic_runner.go:114] Args: [docker exec --privileged ha-439113 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:52:17.827825  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
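	(Illustrative aside, not part of the test output: the kic container created above can also be reached over plain SSH using the key generated in the lines above. A minimal sketch, assuming the key path from the log and the host port Docker published for 22/tcp — 33524 in this run, visible a few lines below:)
	# look up the dynamically published host port for 22/tcp, then connect as the docker user
	PORT=$(docker port ha-439113 22/tcp | head -n1 | cut -d: -f2)
	ssh -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa \
	    -p "$PORT" docker@127.0.0.1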
	I1115 10:52:17.862302  615834 machine.go:94] provisionDockerMachine start ...
	I1115 10:52:17.862429  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:17.886994  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:52:17.887345  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33524 <nil> <nil>}
	I1115 10:52:17.887355  615834 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:52:17.888177  615834 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57300->127.0.0.1:33524: read: connection reset by peer
	I1115 10:52:21.040625  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113
	
	I1115 10:52:21.040656  615834 ubuntu.go:182] provisioning hostname "ha-439113"
	I1115 10:52:21.040728  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:21.057994  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:52:21.058308  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33524 <nil> <nil>}
	I1115 10:52:21.058324  615834 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113 && echo "ha-439113" | sudo tee /etc/hostname
	I1115 10:52:21.218262  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113
	
	I1115 10:52:21.218365  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:21.236459  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:52:21.236769  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33524 <nil> <nil>}
	I1115 10:52:21.236791  615834 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:52:21.389265  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:52:21.389335  615834 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 10:52:21.389362  615834 ubuntu.go:190] setting up certificates
	I1115 10:52:21.389388  615834 provision.go:84] configureAuth start
	I1115 10:52:21.389458  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 10:52:21.407404  615834 provision.go:143] copyHostCerts
	I1115 10:52:21.407451  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:52:21.407485  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 10:52:21.407498  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:52:21.407598  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 10:52:21.407696  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:52:21.407722  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 10:52:21.407732  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:52:21.407760  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 10:52:21.407821  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:52:21.407848  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 10:52:21.407856  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:52:21.407881  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 10:52:21.407942  615834 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113 san=[127.0.0.1 192.168.49.2 ha-439113 localhost minikube]
	I1115 10:52:21.601128  615834 provision.go:177] copyRemoteCerts
	I1115 10:52:21.601196  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:52:21.601243  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:21.618059  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:52:21.720640  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 10:52:21.720702  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 10:52:21.738499  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 10:52:21.738563  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1115 10:52:21.756334  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 10:52:21.756411  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:52:21.773802  615834 provision.go:87] duration metric: took 384.385626ms to configureAuth
	I1115 10:52:21.773827  615834 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:52:21.774007  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:52:21.774108  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:21.792181  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:52:21.792488  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33524 <nil> <nil>}
	I1115 10:52:21.792505  615834 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:52:22.055487  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:52:22.055508  615834 machine.go:97] duration metric: took 4.19318673s to provisionDockerMachine
	I1115 10:52:22.055518  615834 client.go:176] duration metric: took 10.145311721s to LocalClient.Create
	I1115 10:52:22.055558  615834 start.go:167] duration metric: took 10.145409413s to libmachine.API.Create "ha-439113"
	I1115 10:52:22.055565  615834 start.go:293] postStartSetup for "ha-439113" (driver="docker")
	I1115 10:52:22.055575  615834 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:52:22.055642  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:52:22.055701  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:22.074873  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:52:22.180822  615834 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:52:22.184110  615834 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:52:22.184181  615834 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:52:22.184200  615834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 10:52:22.184271  615834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 10:52:22.184357  615834 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 10:52:22.184373  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 10:52:22.184487  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:52:22.192120  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:52:22.209855  615834 start.go:296] duration metric: took 154.275573ms for postStartSetup
	I1115 10:52:22.210297  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 10:52:22.229709  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:52:22.229990  615834 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:52:22.230031  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:22.246845  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:52:22.349690  615834 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:52:22.354322  615834 start.go:128] duration metric: took 10.447903635s to createHost
	I1115 10:52:22.354345  615834 start.go:83] releasing machines lock for "ha-439113", held for 10.448038496s
	I1115 10:52:22.354414  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 10:52:22.370646  615834 ssh_runner.go:195] Run: cat /version.json
	I1115 10:52:22.370699  615834 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:52:22.370785  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:22.370703  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:22.391820  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:52:22.401038  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:52:22.492706  615834 ssh_runner.go:195] Run: systemctl --version
	I1115 10:52:22.586324  615834 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:52:22.621059  615834 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:52:22.625588  615834 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:52:22.625696  615834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:52:22.653803  615834 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 10:52:22.653872  615834 start.go:496] detecting cgroup driver to use...
	I1115 10:52:22.653923  615834 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:52:22.654000  615834 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:52:22.671598  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:52:22.684101  615834 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:52:22.684164  615834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:52:22.701953  615834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:52:22.720477  615834 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:52:22.839197  615834 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:52:22.973776  615834 docker.go:234] disabling docker service ...
	I1115 10:52:22.973890  615834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:52:22.996835  615834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:52:23.014128  615834 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:52:23.134231  615834 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:52:23.267304  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:52:23.279966  615834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:52:23.293982  615834 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:52:23.294052  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:52:23.303416  615834 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:52:23.303487  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:52:23.312786  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:52:23.321901  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:52:23.330667  615834 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:52:23.339021  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:52:23.347575  615834 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:52:23.361325  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:52:23.370249  615834 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:52:23.377894  615834 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:52:23.385134  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:52:23.496671  615834 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:52:23.627621  615834 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:52:23.627747  615834 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:52:23.632590  615834 start.go:564] Will wait 60s for crictl version
	I1115 10:52:23.632707  615834 ssh_runner.go:195] Run: which crictl
	I1115 10:52:23.636316  615834 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:52:23.660657  615834 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:52:23.660772  615834 ssh_runner.go:195] Run: crio --version
	I1115 10:52:23.688588  615834 ssh_runner.go:195] Run: crio --version
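	(Illustrative aside, not part of the test output: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted; a minimal sketch for checking the result on the node, assuming the profile name ha-439113:)
	# dump the drop-in that was just edited
	minikube ssh -p ha-439113 -- sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# confirm the pause image and cgroup manager CRI-O resolved after the restart
	minikube ssh -p ha-439113 -- sudo crio config | grep -E 'pause_image|cgroup_manager'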
	I1115 10:52:23.724523  615834 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:52:23.727329  615834 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:52:23.742793  615834 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 10:52:23.746661  615834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:52:23.756777  615834 kubeadm.go:884] updating cluster {Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:52:23.756916  615834 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:52:23.756985  615834 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:52:23.791518  615834 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:52:23.791553  615834 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:52:23.791608  615834 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:52:23.816324  615834 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:52:23.816345  615834 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:52:23.816352  615834 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 10:52:23.816457  615834 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:52:23.816543  615834 ssh_runner.go:195] Run: crio config
	I1115 10:52:23.871271  615834 cni.go:84] Creating CNI manager for ""
	I1115 10:52:23.871296  615834 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1115 10:52:23.871344  615834 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:52:23.871375  615834 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-439113 NodeName:ha-439113 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:52:23.871518  615834 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-439113"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:52:23.871550  615834 kube-vip.go:115] generating kube-vip config ...
	I1115 10:52:23.871606  615834 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 10:52:23.883474  615834 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:52:23.883590  615834 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
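	(Illustrative aside, not part of the test output: once the control plane is up, the effect of this kube-vip manifest can be checked from the host; a minimal sketch, assuming kubectl's context is named after the profile:)
	# kube-vip runs as a static pod in kube-system
	kubectl --context ha-439113 -n kube-system get pods -o wide | grep kube-vip
	# the VIP 192.168.49.254 should be bound on eth0 of the current leader
	minikube ssh -p ha-439113 -- ip addr show eth0 | grep 192.168.49.254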
	I1115 10:52:23.883672  615834 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:52:23.891368  615834 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:52:23.891438  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1115 10:52:23.899042  615834 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1115 10:52:23.911909  615834 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:52:23.924778  615834 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1115 10:52:23.937611  615834 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1115 10:52:23.950683  615834 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 10:52:23.954252  615834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:52:23.964098  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:52:24.090640  615834 ssh_runner.go:195] Run: sudo systemctl start kubelet
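	(Illustrative aside, not part of the test output: a quick way to verify the kubelet unit files scp'd above and the service that was just started; paths are the ones from the log, and the unit may keep restarting until kubeadm has written its config:)
	minikube ssh -p ha-439113 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	minikube ssh -p ha-439113 -- systemctl status kubelet --no-pager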
	I1115 10:52:24.107612  615834 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.2
	I1115 10:52:24.107684  615834 certs.go:195] generating shared ca certs ...
	I1115 10:52:24.107716  615834 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:24.107920  615834 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 10:52:24.108024  615834 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 10:52:24.108053  615834 certs.go:257] generating profile certs ...
	I1115 10:52:24.108166  615834 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 10:52:24.108201  615834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt with IP's: []
	I1115 10:52:24.554437  615834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt ...
	I1115 10:52:24.554475  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt: {Name:mk438c91bbfdc71ed98bf83a35686eb336e160af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:24.554716  615834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key ...
	I1115 10:52:24.554744  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key: {Name:mk02e6816386c2f23446825dc7817e68bb37681f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:24.554852  615834 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.0606794e
	I1115 10:52:24.554871  615834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.0606794e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1115 10:52:24.846690  615834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.0606794e ...
	I1115 10:52:24.846719  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.0606794e: {Name:mk6e8b02c721d9233c644f83207024f5d8ec47b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:24.846896  615834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.0606794e ...
	I1115 10:52:24.846911  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.0606794e: {Name:mk550e3639d934c5207f115051431648085f918a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:24.846992  615834 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.0606794e -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt
	I1115 10:52:24.847070  615834 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.0606794e -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key
	I1115 10:52:24.847142  615834 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
	I1115 10:52:24.847158  615834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt with IP's: []
	I1115 10:52:25.108305  615834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt ...
	I1115 10:52:25.108333  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt: {Name:mke961fbe90f89a22239bb6958edf2896c46d23c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:25.108521  615834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key ...
	I1115 10:52:25.108534  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key: {Name:mka3b2de22e0defa33f1fbe91a5aef4867a64317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:25.108626  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 10:52:25.108646  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 10:52:25.108659  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 10:52:25.108675  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 10:52:25.108688  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 10:52:25.108704  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 10:52:25.108742  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 10:52:25.108765  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 10:52:25.108819  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 10:52:25.108874  615834 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 10:52:25.108885  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:52:25.108911  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 10:52:25.108936  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:52:25.108963  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 10:52:25.109009  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:52:25.109039  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 10:52:25.109061  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:52:25.109072  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 10:52:25.109631  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:52:25.130067  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 10:52:25.148156  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:52:25.167194  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:52:25.185554  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:52:25.204053  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:52:25.222157  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:52:25.241325  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:52:25.258938  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 10:52:25.276403  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:52:25.294188  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 10:52:25.312518  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:52:25.325426  615834 ssh_runner.go:195] Run: openssl version
	I1115 10:52:25.331663  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 10:52:25.340063  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 10:52:25.343617  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 10:52:25.343733  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 10:52:25.384318  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:52:25.392577  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:52:25.400710  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:52:25.404422  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:52:25.404488  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:52:25.445776  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:52:25.454084  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 10:52:25.462420  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 10:52:25.466760  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 10:52:25.466825  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 10:52:25.507739  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
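
The openssl x509 -hash / ln -fs pairs above build the hash-named symlinks (3ec20f2e.0, b5213941.0, 51391683.0) that the system trust store uses to look up each CA certificate. A minimal sketch of that step, shelling out to the same two commands; the certificate path and the need for root on /etc/ssl/certs are assumptions about where this would run.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // linkCertByHash mirrors the pattern in the log: ask openssl for the
    // subject-name hash of a PEM certificate, then (re)create the
    // /etc/ssl/certs/<hash>.0 symlink the system trust store expects.
    // On the test node this runs over SSH with sudo; here it runs locally.
    func linkCertByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        return exec.Command("ln", "-fs", certPath, link).Run()
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("error:", err)
        }
    }
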
	I1115 10:52:25.516688  615834 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:52:25.520465  615834 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:52:25.520553  615834 kubeadm.go:401] StartCluster: {Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:52:25.520641  615834 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:52:25.520715  615834 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:52:25.547927  615834 cri.go:89] found id: ""
	I1115 10:52:25.548044  615834 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:52:25.555966  615834 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:52:25.563758  615834 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:52:25.563877  615834 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:52:25.571839  615834 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:52:25.571860  615834 kubeadm.go:158] found existing configuration files:
	
	I1115 10:52:25.571936  615834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:52:25.579638  615834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:52:25.579754  615834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:52:25.587053  615834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:52:25.594888  615834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:52:25.594978  615834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:52:25.602693  615834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:52:25.610312  615834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:52:25.610393  615834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:52:25.617971  615834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:52:25.625886  615834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:52:25.625983  615834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
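
The grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed so kubeadm can regenerate it. A rough sketch of the same loop, with the endpoint and file list taken from the log; the helper name is made up and error handling is simplified.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // sweepStaleKubeconfigs removes any kubeconfig that does not point at the
    // expected control-plane endpoint, matching the grep/rm sequence in the log.
    func sweepStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is missing or the file does not exist.
            if err := exec.Command("grep", endpoint, f).Run(); err != nil {
                os.Remove(f) // ignore errors: the file may already be absent, as on a first start
                fmt.Printf("removed (or absent): %s\n", f)
            }
        }
    }

    func main() {
        sweepStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }
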
	I1115 10:52:25.633574  615834 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:52:25.677252  615834 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:52:25.677698  615834 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:52:25.708412  615834 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:52:25.708566  615834 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 10:52:25.708672  615834 kubeadm.go:319] OS: Linux
	I1115 10:52:25.708752  615834 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:52:25.708819  615834 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:52:25.708896  615834 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:52:25.708957  615834 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:52:25.709016  615834 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:52:25.709075  615834 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:52:25.709130  615834 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:52:25.709188  615834 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:52:25.709245  615834 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:52:25.779557  615834 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:52:25.779756  615834 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:52:25.779911  615834 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:52:25.789262  615834 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:52:25.795756  615834 out.go:252]   - Generating certificates and keys ...
	I1115 10:52:25.795921  615834 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:52:25.796023  615834 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:52:26.163701  615834 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:52:26.496396  615834 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:52:27.022598  615834 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:52:27.803078  615834 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:52:28.032504  615834 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:52:28.032888  615834 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [ha-439113 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 10:52:28.137411  615834 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:52:28.137819  615834 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [ha-439113 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 10:52:29.114848  615834 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:52:29.664327  615834 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:52:29.906078  615834 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:52:29.906403  615834 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:52:32.408567  615834 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:52:32.642398  615834 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:52:33.243645  615834 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:52:33.594554  615834 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:52:33.707496  615834 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:52:33.708305  615834 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:52:33.711007  615834 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:52:33.714419  615834 out.go:252]   - Booting up control plane ...
	I1115 10:52:33.714532  615834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:52:33.714622  615834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:52:33.714697  615834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:52:33.731207  615834 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:52:33.731325  615834 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:52:33.738515  615834 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:52:33.738836  615834 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:52:33.739039  615834 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:52:33.869273  615834 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:52:33.869411  615834 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:52:35.371019  615834 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501707463s
	I1115 10:52:35.374685  615834 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:52:35.374789  615834 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1115 10:52:35.374886  615834 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:52:35.374972  615834 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:52:39.892256  615834 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.514709618s
	I1115 10:52:40.863354  615834 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.488586302s
	I1115 10:52:42.879079  615834 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.504323654s
	I1115 10:52:42.898852  615834 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:52:42.914349  615834 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:52:42.936963  615834 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:52:42.937182  615834 kubeadm.go:319] [mark-control-plane] Marking the node ha-439113 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:52:42.949865  615834 kubeadm.go:319] [bootstrap-token] Using token: cozhby.k5651djpc1zqxsaw
	I1115 10:52:42.952786  615834 out.go:252]   - Configuring RBAC rules ...
	I1115 10:52:42.952958  615834 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:52:42.957952  615834 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:52:42.966656  615834 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:52:42.971023  615834 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:52:42.976953  615834 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:52:42.981154  615834 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:52:43.286508  615834 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:52:43.753810  615834 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:52:44.286640  615834 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:52:44.288033  615834 kubeadm.go:319] 
	I1115 10:52:44.288118  615834 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:52:44.288124  615834 kubeadm.go:319] 
	I1115 10:52:44.288205  615834 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:52:44.288209  615834 kubeadm.go:319] 
	I1115 10:52:44.288236  615834 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:52:44.288725  615834 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:52:44.288792  615834 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:52:44.288800  615834 kubeadm.go:319] 
	I1115 10:52:44.288903  615834 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:52:44.288915  615834 kubeadm.go:319] 
	I1115 10:52:44.288965  615834 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:52:44.288973  615834 kubeadm.go:319] 
	I1115 10:52:44.289027  615834 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:52:44.289108  615834 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:52:44.289183  615834 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:52:44.289190  615834 kubeadm.go:319] 
	I1115 10:52:44.289597  615834 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:52:44.289692  615834 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:52:44.289698  615834 kubeadm.go:319] 
	I1115 10:52:44.289852  615834 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cozhby.k5651djpc1zqxsaw \
	I1115 10:52:44.289976  615834 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a \
	I1115 10:52:44.290007  615834 kubeadm.go:319] 	--control-plane 
	I1115 10:52:44.290014  615834 kubeadm.go:319] 
	I1115 10:52:44.290104  615834 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:52:44.290113  615834 kubeadm.go:319] 
	I1115 10:52:44.290200  615834 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cozhby.k5651djpc1zqxsaw \
	I1115 10:52:44.290311  615834 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a 
	I1115 10:52:44.294927  615834 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:52:44.295159  615834 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 10:52:44.295268  615834 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
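
The --discovery-token-ca-cert-hash value in the join commands printed above is the SHA-256 of the cluster CA certificate's Subject Public Key Info, which joining nodes use to pin the CA. A small sketch that recomputes it from a ca.crt on disk; the path is the one used elsewhere in this log and is otherwise an assumption.

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash recomputes the value kubeadm prints as
    // --discovery-token-ca-cert-hash: sha256 over the DER-encoded
    // Subject Public Key Info of the cluster CA certificate.
    func caCertHash(caPath string) (string, error) {
        data, err := os.ReadFile(caPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", caPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println(h)
    }
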
	I1115 10:52:44.295283  615834 cni.go:84] Creating CNI manager for ""
	I1115 10:52:44.295290  615834 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1115 10:52:44.298471  615834 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:52:44.301289  615834 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:52:44.305295  615834 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:52:44.305317  615834 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:52:44.317787  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:52:44.607643  615834 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:52:44.607800  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:44.607802  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-439113 minikube.k8s.io/updated_at=2025_11_15T10_52_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=ha-439113 minikube.k8s.io/primary=true
	I1115 10:52:44.622812  615834 ops.go:34] apiserver oom_adj: -16
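
The oom_adj probe just above reads /proc/<pid>/oom_adj for the kube-apiserver process; the -16 reported here means the kernel is discouraged from OOM-killing it. A tiny sketch of the same check, assuming pgrep is available on the node.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // apiserverOOMAdj reproduces the probe from the log: find the kube-apiserver
    // PID with pgrep and read its legacy oom_adj value from /proc.
    func apiserverOOMAdj() (string, error) {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            return "", err
        }
        pid := strings.Fields(string(out))[0] // first match is enough for this sketch
        val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(val)), nil
    }

    func main() {
        v, err := apiserverOOMAdj()
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("kube-apiserver oom_adj:", v)
    }
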
	I1115 10:52:44.745222  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:45.249009  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:45.746142  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:46.245879  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:46.746300  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:47.246287  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:47.745337  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:48.246229  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:48.745304  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:48.900701  615834 kubeadm.go:1114] duration metric: took 4.292964203s to wait for elevateKubeSystemPrivileges
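
The run of kubectl get sa default calls above is a readiness poll: it retries until the default ServiceAccount exists (about 4.3 s here) before the kube-system privileges step is considered done. A hedged sketch of such a poll; the 500 ms interval and the timeout are illustrative values, not minikube's.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
    // timeout expires, mirroring the retry loop visible in the log.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // the default ServiceAccount exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            fmt.Println("error:", err)
        }
    }
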
	I1115 10:52:48.900725  615834 kubeadm.go:403] duration metric: took 23.380179963s to StartCluster
	I1115 10:52:48.900743  615834 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:48.900800  615834 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 10:52:48.901471  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:48.901687  615834 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:52:48.901714  615834 start.go:242] waiting for startup goroutines ...
	I1115 10:52:48.901721  615834 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:52:48.901780  615834 addons.go:70] Setting storage-provisioner=true in profile "ha-439113"
	I1115 10:52:48.901799  615834 addons.go:239] Setting addon storage-provisioner=true in "ha-439113"
	I1115 10:52:48.901823  615834 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:52:48.902304  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:52:48.902464  615834 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:52:48.902711  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:52:48.902750  615834 addons.go:70] Setting default-storageclass=true in profile "ha-439113"
	I1115 10:52:48.902767  615834 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "ha-439113"
	I1115 10:52:48.902994  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:52:48.930280  615834 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:52:48.930805  615834 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 10:52:48.930827  615834 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 10:52:48.930834  615834 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 10:52:48.930839  615834 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 10:52:48.930844  615834 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 10:52:48.931179  615834 addons.go:239] Setting addon default-storageclass=true in "ha-439113"
	I1115 10:52:48.931211  615834 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:52:48.931630  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:52:48.937022  615834 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1115 10:52:48.954692  615834 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:52:48.954718  615834 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:52:48.954790  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:48.958729  615834 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:52:48.961757  615834 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:52:48.961785  615834 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:52:48.961851  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:48.991956  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:52:49.003019  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
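
The docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' calls are how the published host port for the container's SSH endpoint is found (33524 for ha-439113 here). A small sketch that runs the same inspection; the container name is taken from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort asks dockerd which host port is published for the container's
    // 22/tcp endpoint, using the same Go template as the log lines above.
    func sshHostPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("ha-439113")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("ssh to 127.0.0.1:" + port)
    }
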
	I1115 10:52:49.126632  615834 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:52:49.199038  615834 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:52:49.200729  615834 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:52:49.476949  615834 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
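
The long sed pipeline at 10:52:49.126632 rewrites the coredns ConfigMap: it inserts a hosts stanza mapping host.minikube.internal to the network gateway ahead of the forward plugin, and also adds the log directive before errors. A minimal sketch of just the hosts insertion as a plain string transformation; reading and replacing the ConfigMap via kubectl is left out, and the sample Corefile is abbreviated.

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a CoreDNS hosts{} stanza ahead of the forward
    // plugin, which is what the sed pipeline in the log does to the coredns
    // ConfigMap. gatewayIP is 192.168.49.1 for this cluster's docker network.
    func injectHostRecord(corefile, gatewayIP string) string {
        stanza := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n", gatewayIP)
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out.WriteString(stanza) // place the hosts block just before forwarding to the host resolver
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        sample := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n}\n"
        fmt.Print(injectHostRecord(sample, "192.168.49.1"))
    }
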
	I1115 10:52:49.712750  615834 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:52:49.715645  615834 addons.go:515] duration metric: took 813.901181ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:52:49.715693  615834 start.go:247] waiting for cluster config update ...
	I1115 10:52:49.715707  615834 start.go:256] writing updated cluster config ...
	I1115 10:52:49.718865  615834 out.go:203] 
	I1115 10:52:49.721875  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:52:49.721967  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:52:49.725237  615834 out.go:179] * Starting "ha-439113-m02" control-plane node in "ha-439113" cluster
	I1115 10:52:49.728036  615834 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:52:49.731102  615834 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:52:49.733925  615834 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:52:49.733952  615834 cache.go:65] Caching tarball of preloaded images
	I1115 10:52:49.733991  615834 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:52:49.734045  615834 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:52:49.734056  615834 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:52:49.734161  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:52:49.753102  615834 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:52:49.753125  615834 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:52:49.753138  615834 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:52:49.753162  615834 start.go:360] acquireMachinesLock for ha-439113-m02: {Name:mk3e9fb80c1177aa3d9d60f93ad9a2d436f1d794 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:52:49.753268  615834 start.go:364] duration metric: took 84.202µs to acquireMachinesLock for "ha-439113-m02"
	I1115 10:52:49.753299  615834 start.go:93] Provisioning new machine with config: &{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:52:49.753373  615834 start.go:125] createHost starting for "m02" (driver="docker")
	I1115 10:52:49.756892  615834 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:52:49.757017  615834 start.go:159] libmachine.API.Create for "ha-439113" (driver="docker")
	I1115 10:52:49.757042  615834 client.go:173] LocalClient.Create starting
	I1115 10:52:49.757111  615834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 10:52:49.757147  615834 main.go:143] libmachine: Decoding PEM data...
	I1115 10:52:49.757164  615834 main.go:143] libmachine: Parsing certificate...
	I1115 10:52:49.757217  615834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 10:52:49.757243  615834 main.go:143] libmachine: Decoding PEM data...
	I1115 10:52:49.757253  615834 main.go:143] libmachine: Parsing certificate...
	I1115 10:52:49.757516  615834 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:52:49.774864  615834 network_create.go:77] Found existing network {name:ha-439113 subnet:0x4001ca8120 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1115 10:52:49.774904  615834 kic.go:121] calculated static IP "192.168.49.3" for the "ha-439113-m02" container
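
The calculated static IP line reflects a simple addressing scheme on the cluster's docker network: the gateway holds .1, the primary node .2, and each additional node takes the next offset, so m02 gets 192.168.49.3. An illustrative sketch of that arithmetic, assuming a /24 subnet; it is not minikube's actual implementation.

    package main

    import (
        "fmt"
        "net"
    )

    // staticIPForNode is an illustrative version of the addressing visible in
    // the log: gateway at .1, primary node at .2, m02 at .3, and so on. It
    // assumes an IPv4 /24 network and fewer than 253 nodes.
    func staticIPForNode(subnetCIDR string, nodeIndex int) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(subnetCIDR)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        if ip == nil {
            return nil, fmt.Errorf("want an IPv4 subnet, got %s", subnetCIDR)
        }
        out := make(net.IP, len(ip))
        copy(out, ip)
        out[3] = byte(1 + nodeIndex) // index 1 -> .2 (primary), index 2 -> .3 (m02), ...
        return out, nil
    }

    func main() {
        for i := 1; i <= 2; i++ {
            ip, _ := staticIPForNode("192.168.49.0/24", i)
            fmt.Printf("node %d -> %s\n", i, ip)
        }
    }
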
	I1115 10:52:49.775009  615834 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:52:49.814441  615834 cli_runner.go:164] Run: docker volume create ha-439113-m02 --label name.minikube.sigs.k8s.io=ha-439113-m02 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:52:49.834186  615834 oci.go:103] Successfully created a docker volume ha-439113-m02
	I1115 10:52:49.834270  615834 cli_runner.go:164] Run: docker run --rm --name ha-439113-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-439113-m02 --entrypoint /usr/bin/test -v ha-439113-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:52:50.436132  615834 oci.go:107] Successfully prepared a docker volume ha-439113-m02
	I1115 10:52:50.436184  615834 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:52:50.436195  615834 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:52:50.436263  615834 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-439113-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:52:55.039651  615834 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-439113-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.603345462s)
	I1115 10:52:55.039694  615834 kic.go:203] duration metric: took 4.603495297s to extract preloaded images to volume ...
	W1115 10:52:55.039945  615834 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:52:55.040125  615834 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:52:55.106173  615834 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-439113-m02 --name ha-439113-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-439113-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-439113-m02 --network ha-439113 --ip 192.168.49.3 --volume ha-439113-m02:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:52:55.420193  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Running}}
	I1115 10:52:55.449058  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 10:52:55.475245  615834 cli_runner.go:164] Run: docker exec ha-439113-m02 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:52:55.537303  615834 oci.go:144] the created container "ha-439113-m02" has a running status.
	I1115 10:52:55.537331  615834 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa...
	I1115 10:52:55.935489  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1115 10:52:55.935608  615834 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:52:55.957478  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 10:52:55.984754  615834 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:52:55.984774  615834 kic_runner.go:114] Args: [docker exec --privileged ha-439113-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
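
The key-creation lines above generate an RSA key pair for the new node and copy the public half into /home/docker/.ssh/authorized_keys inside the container before chowning it to the docker user. A sketch of producing such a key pair and the authorized_keys line with golang.org/x/crypto/ssh; paths are illustrative and the docker exec / chown steps are omitted.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // writeSSHKeyPair generates an RSA key, writes the private key in PEM form
    // to keyPath, and returns the authorized_keys line for the public key,
    // roughly the material the log copies into the container.
    func writeSSHKeyPair(keyPath string) (string, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return "", err
        }
        pemBytes := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile(keyPath, pemBytes, 0o600); err != nil {
            return "", err
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return "", err
        }
        return string(ssh.MarshalAuthorizedKey(pub)), nil
    }

    func main() {
        line, err := writeSSHKeyPair("id_rsa")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Print(line) // append to the node's ~/.ssh/authorized_keys
    }
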
	I1115 10:52:56.029582  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 10:52:56.047598  615834 machine.go:94] provisionDockerMachine start ...
	I1115 10:52:56.047704  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:52:56.065542  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:52:56.065886  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33529 <nil> <nil>}
	I1115 10:52:56.065906  615834 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:52:56.066546  615834 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 10:52:59.224486  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02
	
	I1115 10:52:59.224511  615834 ubuntu.go:182] provisioning hostname "ha-439113-m02"
	I1115 10:52:59.224600  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:52:59.242529  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:52:59.242842  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33529 <nil> <nil>}
	I1115 10:52:59.242860  615834 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113-m02 && echo "ha-439113-m02" | sudo tee /etc/hostname
	I1115 10:52:59.402452  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02
	
	I1115 10:52:59.402599  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:52:59.419963  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:52:59.420272  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33529 <nil> <nil>}
	I1115 10:52:59.420289  615834 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:52:59.569155  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:52:59.569183  615834 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 10:52:59.569199  615834 ubuntu.go:190] setting up certificates
	I1115 10:52:59.569216  615834 provision.go:84] configureAuth start
	I1115 10:52:59.569292  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 10:52:59.585645  615834 provision.go:143] copyHostCerts
	I1115 10:52:59.585694  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:52:59.585729  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 10:52:59.585740  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:52:59.585818  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 10:52:59.585962  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:52:59.586001  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 10:52:59.586010  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:52:59.586111  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 10:52:59.586179  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:52:59.586205  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 10:52:59.586214  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:52:59.586242  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 10:52:59.586299  615834 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113-m02 san=[127.0.0.1 192.168.49.3 ha-439113-m02 localhost minikube]
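
configureAuth generates a server certificate whose SANs are the list shown above (127.0.0.1, 192.168.49.3, ha-439113-m02, localhost, minikube), signed by the minikube CA. The sketch below builds a certificate with the same SANs but self-signs it for brevity; the organization and the 26280h lifetime come from this log, everything else is an assumption.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // newServerCert creates a certificate carrying the SANs listed in the log
    // for ha-439113-m02. It is self-signed here; the real server.pem is signed
    // by the minikube CA (ca.pem / ca-key.pem).
    func newServerCert() ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-439113-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
            DNSNames:     []string{"ha-439113-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
        pemCert, err := newServerCert()
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Print(string(pemCert))
    }
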
	I1115 10:52:59.933236  615834 provision.go:177] copyRemoteCerts
	I1115 10:52:59.933311  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:52:59.933366  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:52:59.951313  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:53:00.081558  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 10:53:00.081698  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 10:53:00.144402  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 10:53:00.144483  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:53:00.220295  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 10:53:00.220405  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 10:53:00.292332  615834 provision.go:87] duration metric: took 723.096278ms to configureAuth
	I1115 10:53:00.292371  615834 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:53:00.292607  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:53:00.303924  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:53:00.364390  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:53:00.364745  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33529 <nil> <nil>}
	I1115 10:53:00.364768  615834 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:53:00.680385  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:53:00.680412  615834 machine.go:97] duration metric: took 4.632794086s to provisionDockerMachine
	I1115 10:53:00.680422  615834 client.go:176] duration metric: took 10.923374181s to LocalClient.Create
	I1115 10:53:00.680433  615834 start.go:167] duration metric: took 10.923416642s to libmachine.API.Create "ha-439113"
	I1115 10:53:00.680440  615834 start.go:293] postStartSetup for "ha-439113-m02" (driver="docker")
	I1115 10:53:00.680450  615834 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:53:00.680514  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:53:00.680559  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:53:00.701184  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:53:00.808970  615834 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:53:00.813043  615834 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:53:00.813076  615834 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:53:00.813088  615834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 10:53:00.813159  615834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 10:53:00.813239  615834 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 10:53:00.813249  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 10:53:00.813348  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:53:00.821411  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:53:00.841985  615834 start.go:296] duration metric: took 161.52912ms for postStartSetup
	I1115 10:53:00.842403  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 10:53:00.861175  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:53:00.861487  615834 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:53:00.861634  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:53:00.879284  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:53:00.982093  615834 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:53:00.986904  615834 start.go:128] duration metric: took 11.233516391s to createHost
	I1115 10:53:00.986931  615834 start.go:83] releasing machines lock for "ha-439113-m02", held for 11.233648971s
	I1115 10:53:00.987002  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 10:53:01.008778  615834 out.go:179] * Found network options:
	I1115 10:53:01.012367  615834 out.go:179]   - NO_PROXY=192.168.49.2
	W1115 10:53:01.015379  615834 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 10:53:01.015438  615834 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 10:53:01.015521  615834 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:53:01.015568  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:53:01.015913  615834 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:53:01.015966  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:53:01.041833  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:53:01.042637  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:53:01.242849  615834 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:53:01.247673  615834 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:53:01.247794  615834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:53:01.278117  615834 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 10:53:01.278197  615834 start.go:496] detecting cgroup driver to use...
	I1115 10:53:01.278246  615834 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:53:01.278325  615834 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:53:01.296968  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:53:01.310621  615834 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:53:01.310739  615834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:53:01.328674  615834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:53:01.348385  615834 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:53:01.482770  615834 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:53:01.618324  615834 docker.go:234] disabling docker service ...
	I1115 10:53:01.618397  615834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:53:01.640724  615834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:53:01.655986  615834 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:53:01.791958  615834 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:53:01.922418  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
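Before CRI-O is configured, the kic node is moved off the Docker-based runtimes: cri-dockerd and dockerd are stopped, disabled and masked so the kubelet can only talk to crio.sock. A condensed sketch of the same sequence, using the unit names from the commands above:

	# take the cri-docker shim out of the picture
	sudo systemctl stop cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	# same for the docker engine itself
	sudo systemctl stop docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	systemctl is-active docker   # expected: inactive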
	I1115 10:53:01.936296  615834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:53:01.950481  615834 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:53:01.950545  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:53:01.959267  615834 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:53:01.959380  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:53:01.968531  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:53:01.977563  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:53:01.986850  615834 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:53:01.995480  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:53:02.004423  615834 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:53:02.020832  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:53:02.032163  615834 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:53:02.041833  615834 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:53:02.054019  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:53:02.173378  615834 ssh_runner.go:195] Run: sudo systemctl restart crio
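With Docker out of the way, crictl is pointed at the CRI-O socket and the drop-in /etc/crio/crio.conf.d/02-crio.conf is edited in place: the pause image is pinned, the cgroup manager is set to cgroupfs to match the kubelet, and unprivileged port binding plus IPv4 forwarding are enabled. A condensed sketch of the key edits, lifted from the commands above:

	# point crictl at CRI-O
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and use the cgroupfs manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# let pods bind low ports and make sure forwarding is on
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio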
	I1115 10:53:02.316319  615834 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:53:02.316436  615834 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:53:02.320549  615834 start.go:564] Will wait 60s for crictl version
	I1115 10:53:02.320660  615834 ssh_runner.go:195] Run: which crictl
	I1115 10:53:02.324799  615834 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:53:02.354893  615834 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:53:02.355043  615834 ssh_runner.go:195] Run: crio --version
	I1115 10:53:02.385812  615834 ssh_runner.go:195] Run: crio --version
	I1115 10:53:02.417340  615834 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:53:02.420195  615834 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 10:53:02.422960  615834 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:53:02.441143  615834 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 10:53:02.445342  615834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:53:02.455459  615834 mustload.go:66] Loading cluster: ha-439113
	I1115 10:53:02.455671  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:53:02.455920  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:53:02.473394  615834 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:53:02.473671  615834 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.3
	I1115 10:53:02.473690  615834 certs.go:195] generating shared ca certs ...
	I1115 10:53:02.473706  615834 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:53:02.473835  615834 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 10:53:02.473881  615834 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 10:53:02.473892  615834 certs.go:257] generating profile certs ...
	I1115 10:53:02.473967  615834 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 10:53:02.473999  615834 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.29032bc8
	I1115 10:53:02.474016  615834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.29032bc8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1115 10:53:02.688847  615834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.29032bc8 ...
	I1115 10:53:02.688884  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.29032bc8: {Name:mkb1e34c4420c67bd5263ca2027113dec29d5023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:53:02.689081  615834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.29032bc8 ...
	I1115 10:53:02.689099  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.29032bc8: {Name:mk77433e62660c76c57a09a0de21042793ab4c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:53:02.689184  615834 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.29032bc8 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt
	I1115 10:53:02.689315  615834 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.29032bc8 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key
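The profile's API server certificate is regenerated here because the new control-plane IP 192.168.49.3 must appear in its SAN list alongside the primary (192.168.49.2) and the VIP (192.168.49.254). A quick way to confirm the SANs on the written certificate, using the path from the log:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'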
	I1115 10:53:02.689447  615834 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
	I1115 10:53:02.689464  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 10:53:02.689480  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 10:53:02.689499  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 10:53:02.689515  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 10:53:02.689529  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 10:53:02.689540  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 10:53:02.689551  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 10:53:02.689561  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 10:53:02.689616  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 10:53:02.689647  615834 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 10:53:02.689659  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:53:02.689685  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 10:53:02.689709  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:53:02.689734  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 10:53:02.689777  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:53:02.689812  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:53:02.689830  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 10:53:02.689843  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 10:53:02.689900  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:53:02.707023  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:53:02.809240  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 10:53:02.812976  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 10:53:02.821441  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 10:53:02.825073  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 10:53:02.833376  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 10:53:02.836965  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 10:53:02.845363  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 10:53:02.849022  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 10:53:02.857893  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 10:53:02.861635  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 10:53:02.869981  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 10:53:02.873465  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 10:53:02.881678  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:53:02.900782  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 10:53:02.918838  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:53:02.936798  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:53:02.955025  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1115 10:53:02.974508  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:53:02.992409  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:53:03.015675  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:53:03.035409  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:53:03.054563  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 10:53:03.072550  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 10:53:03.090566  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 10:53:03.104801  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 10:53:03.117939  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 10:53:03.130822  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 10:53:03.143834  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 10:53:03.156727  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 10:53:03.170529  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 10:53:03.183731  615834 ssh_runner.go:195] Run: openssl version
	I1115 10:53:03.190745  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 10:53:03.200536  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 10:53:03.204392  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 10:53:03.204457  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 10:53:03.245496  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:53:03.253893  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:53:03.262357  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:53:03.266439  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:53:03.266529  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:53:03.307492  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:53:03.316145  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 10:53:03.324975  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 10:53:03.328846  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 10:53:03.328991  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 10:53:03.370097  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
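Each CA bundle dropped into /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject-hash name; the hashes seen above (3ec20f2e.0, b5213941.0, 51391683.0) are exactly what `openssl x509 -hash` prints for those files. A sketch of the linking step for one bundle:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"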
	I1115 10:53:03.378718  615834 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:53:03.382946  615834 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:53:03.383037  615834 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1115 10:53:03.383130  615834 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:53:03.383160  615834 kube-vip.go:115] generating kube-vip config ...
	I1115 10:53:03.383207  615834 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 10:53:03.395337  615834 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:53:03.395446  615834 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
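The lsmod check above came back empty, so kube-vip gives up on IPVS-based control-plane load balancing and the generated manifest only announces the 192.168.49.254 VIP over ARP on eth0. A hedged sketch for checking, and where the kernel ships them, loading the IPVS modules (on this runner they are evidently not available):

	lsmod | grep ip_vs || sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	lsmod | grep ip_vs   # still empty here, so only ARP-based failover of the VIP is used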
	I1115 10:53:03.395555  615834 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:53:03.403843  615834 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:53:03.403920  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 10:53:03.411984  615834 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 10:53:03.425530  615834 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:53:03.440951  615834 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 10:53:03.454339  615834 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 10:53:03.458265  615834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:53:03.469366  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:53:03.584964  615834 ssh_runner.go:195] Run: sudo systemctl start kubelet
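Because kube-vip.yaml was written to /etc/kubernetes/manifests, the kubelet started here picks it up as a static pod, and the current leader starts answering for 192.168.49.254. A sketch for checking this directly on the node through CRI-O:

	sudo crictl ps --name kube-vip
	ip addr show dev eth0 | grep 192.168.49.254   # present only on the node currently holding the VIP lease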
	I1115 10:53:03.603455  615834 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:53:03.603770  615834 start.go:318] joinCluster: &{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:53:03.603895  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1115 10:53:03.603952  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:53:03.623443  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:53:03.804336  615834 start.go:344] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:53:03.804415  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p0tbi8.pbuwwja7os5f0i73 --discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-439113-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I1115 10:53:26.054188  615834 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p0tbi8.pbuwwja7os5f0i73 --discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-439113-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (22.249750371s)
	I1115 10:53:26.054265  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
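The join command executed above is the output of `kubeadm token create --print-join-command` on the primary node, extended with the --control-plane and advertise-address flags; the join itself took about 22 seconds. The same binary and kubeconfig paths from the log can be used to reproduce and verify it:

	# on the primary control plane
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm token create --print-join-command --ttl=0
	# after the join, the second control-plane node should be listed
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide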
	I1115 10:53:26.438754  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-439113-m02 minikube.k8s.io/updated_at=2025_11_15T10_53_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=ha-439113 minikube.k8s.io/primary=false
	I1115 10:53:26.595421  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-439113-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1115 10:53:26.780779  615834 start.go:320] duration metric: took 23.177004016s to joinCluster
	I1115 10:53:26.780842  615834 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:53:26.781160  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:53:26.783929  615834 out.go:179] * Verifying Kubernetes components...
	I1115 10:53:26.786961  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:53:26.983446  615834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:53:26.998184  615834 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 10:53:26.998257  615834 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 10:53:26.998526  615834 node_ready.go:35] waiting up to 6m0s for node "ha-439113-m02" to be "Ready" ...
	W1115 10:53:29.002308  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:31.002655  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:33.011921  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:35.501884  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:38.003067  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:40.505715  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:43.002629  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:45.501909  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:47.502152  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:50.002051  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:52.002438  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:54.502175  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:56.504031  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:59.001885  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:01.002028  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:03.003943  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:05.502891  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:07.502959  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:10.002500  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:12.002873  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:14.502243  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:16.502532  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:19.002594  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	I1115 10:54:20.502277  615834 node_ready.go:49] node "ha-439113-m02" is "Ready"
	I1115 10:54:20.502316  615834 node_ready.go:38] duration metric: took 53.503771317s for node "ha-439113-m02" to be "Ready" ...
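The Ready poll above takes roughly 54 seconds while kindnet and kube-proxy come up on the new node. Outside the harness the same wait can be expressed as a single kubectl call (a sketch, not what minikube itself runs):

	kubectl wait --for=condition=Ready node/ha-439113-m02 --timeout=6m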
	I1115 10:54:20.502329  615834 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:54:20.502389  615834 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:54:20.514872  615834 api_server.go:72] duration metric: took 53.733982457s to wait for apiserver process to appear ...
	I1115 10:54:20.514895  615834 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:54:20.514914  615834 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 10:54:20.524348  615834 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 10:54:20.528505  615834 api_server.go:141] control plane version: v1.34.1
	I1115 10:54:20.528579  615834 api_server.go:131] duration metric: took 13.676063ms to wait for apiserver health ...
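The health check is a plain HTTPS GET against /healthz on the primary API server, which answers 200 with the body `ok`; the default RBAC bindings expose this endpoint anonymously. A curl equivalent, either skipping verification or trusting the cluster CA copied to the node earlier:

	curl -sk https://192.168.49.2:8443/healthz; echo
	curl -s --cacert /var/lib/minikube/certs/ca.crt https://192.168.49.2:8443/healthz; echo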
	I1115 10:54:20.528621  615834 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:54:20.533546  615834 system_pods.go:59] 17 kube-system pods found
	I1115 10:54:20.533579  615834 system_pods.go:61] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running
	I1115 10:54:20.533586  615834 system_pods.go:61] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running
	I1115 10:54:20.533591  615834 system_pods.go:61] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 10:54:20.533595  615834 system_pods.go:61] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 10:54:20.533600  615834 system_pods.go:61] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 10:54:20.533604  615834 system_pods.go:61] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running
	I1115 10:54:20.533609  615834 system_pods.go:61] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 10:54:20.533614  615834 system_pods.go:61] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 10:54:20.533618  615834 system_pods.go:61] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running
	I1115 10:54:20.533623  615834 system_pods.go:61] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 10:54:20.533628  615834 system_pods.go:61] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running
	I1115 10:54:20.533634  615834 system_pods.go:61] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 10:54:20.533639  615834 system_pods.go:61] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running
	I1115 10:54:20.533652  615834 system_pods.go:61] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 10:54:20.533656  615834 system_pods.go:61] "kube-vip-ha-439113" [397a8753-e06e-4144-882e-6bbf595950d8] Running
	I1115 10:54:20.533660  615834 system_pods.go:61] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 10:54:20.533668  615834 system_pods.go:61] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running
	I1115 10:54:20.533674  615834 system_pods.go:74] duration metric: took 5.033609ms to wait for pod list to return data ...
	I1115 10:54:20.533684  615834 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:54:20.536782  615834 default_sa.go:45] found service account: "default"
	I1115 10:54:20.536808  615834 default_sa.go:55] duration metric: took 3.117861ms for default service account to be created ...
	I1115 10:54:20.536817  615834 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:54:20.540627  615834 system_pods.go:86] 17 kube-system pods found
	I1115 10:54:20.540658  615834 system_pods.go:89] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running
	I1115 10:54:20.540664  615834 system_pods.go:89] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running
	I1115 10:54:20.540669  615834 system_pods.go:89] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 10:54:20.540673  615834 system_pods.go:89] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 10:54:20.540679  615834 system_pods.go:89] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 10:54:20.540683  615834 system_pods.go:89] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running
	I1115 10:54:20.540687  615834 system_pods.go:89] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 10:54:20.540691  615834 system_pods.go:89] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 10:54:20.540697  615834 system_pods.go:89] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running
	I1115 10:54:20.540701  615834 system_pods.go:89] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 10:54:20.540736  615834 system_pods.go:89] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running
	I1115 10:54:20.540747  615834 system_pods.go:89] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 10:54:20.540751  615834 system_pods.go:89] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running
	I1115 10:54:20.540755  615834 system_pods.go:89] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 10:54:20.540759  615834 system_pods.go:89] "kube-vip-ha-439113" [397a8753-e06e-4144-882e-6bbf595950d8] Running
	I1115 10:54:20.540763  615834 system_pods.go:89] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 10:54:20.540767  615834 system_pods.go:89] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running
	I1115 10:54:20.540780  615834 system_pods.go:126] duration metric: took 3.95687ms to wait for k8s-apps to be running ...
	I1115 10:54:20.540788  615834 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:54:20.540843  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:54:20.564288  615834 system_svc.go:56] duration metric: took 23.490494ms WaitForService to wait for kubelet
	I1115 10:54:20.564316  615834 kubeadm.go:587] duration metric: took 53.783432535s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:54:20.564335  615834 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:54:20.569448  615834 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:54:20.569480  615834 node_conditions.go:123] node cpu capacity is 2
	I1115 10:54:20.569492  615834 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:54:20.569497  615834 node_conditions.go:123] node cpu capacity is 2
	I1115 10:54:20.569504  615834 node_conditions.go:105] duration metric: took 5.163235ms to run NodePressure ...
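Both nodes report 2 CPUs and about 203 GiB of ephemeral storage, which is what the NodePressure verification reads. The same capacity figures are visible with kubectl (a sketch):

	kubectl describe node ha-439113-m02 | grep -A5 'Capacity:'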
	I1115 10:54:20.569515  615834 start.go:242] waiting for startup goroutines ...
	I1115 10:54:20.569540  615834 start.go:256] writing updated cluster config ...
	I1115 10:54:20.573029  615834 out.go:203] 
	I1115 10:54:20.576017  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:54:20.576141  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:54:20.579475  615834 out.go:179] * Starting "ha-439113-m03" control-plane node in "ha-439113" cluster
	I1115 10:54:20.582281  615834 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:54:20.585209  615834 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:54:20.587846  615834 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:54:20.587912  615834 cache.go:65] Caching tarball of preloaded images
	I1115 10:54:20.587882  615834 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:54:20.588228  615834 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:54:20.588245  615834 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:54:20.588459  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:54:20.612425  615834 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:54:20.612449  615834 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:54:20.612466  615834 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:54:20.612497  615834 start.go:360] acquireMachinesLock for ha-439113-m03: {Name:mka79aa6495619db3e64a5700d9ed838bd218f87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:54:20.612613  615834 start.go:364] duration metric: took 96.773µs to acquireMachinesLock for "ha-439113-m03"
	I1115 10:54:20.612643  615834 start.go:93] Provisioning new machine with config: &{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:fal
se kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:54:20.612748  615834 start.go:125] createHost starting for "m03" (driver="docker")
	I1115 10:54:20.618177  615834 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:54:20.618305  615834 start.go:159] libmachine.API.Create for "ha-439113" (driver="docker")
	I1115 10:54:20.618339  615834 client.go:173] LocalClient.Create starting
	I1115 10:54:20.618426  615834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 10:54:20.618465  615834 main.go:143] libmachine: Decoding PEM data...
	I1115 10:54:20.618483  615834 main.go:143] libmachine: Parsing certificate...
	I1115 10:54:20.618539  615834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 10:54:20.618560  615834 main.go:143] libmachine: Decoding PEM data...
	I1115 10:54:20.618570  615834 main.go:143] libmachine: Parsing certificate...
	I1115 10:54:20.618824  615834 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:54:20.638778  615834 network_create.go:77] Found existing network {name:ha-439113 subnet:0x4001d2e690 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1115 10:54:20.638818  615834 kic.go:121] calculated static IP "192.168.49.4" for the "ha-439113-m03" container
	I1115 10:54:20.638904  615834 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:54:20.657632  615834 cli_runner.go:164] Run: docker volume create ha-439113-m03 --label name.minikube.sigs.k8s.io=ha-439113-m03 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:54:20.675738  615834 oci.go:103] Successfully created a docker volume ha-439113-m03
	I1115 10:54:20.675835  615834 cli_runner.go:164] Run: docker run --rm --name ha-439113-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-439113-m03 --entrypoint /usr/bin/test -v ha-439113-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:54:21.209664  615834 oci.go:107] Successfully prepared a docker volume ha-439113-m03
	I1115 10:54:21.209729  615834 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:54:21.209742  615834 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:54:21.209821  615834 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-439113-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:54:25.642090  615834 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-439113-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.432221799s)
	I1115 10:54:25.642125  615834 kic.go:203] duration metric: took 4.432378543s to extract preloaded images to volume ...
	W1115 10:54:25.642270  615834 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:54:25.642387  615834 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:54:25.703940  615834 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-439113-m03 --name ha-439113-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-439113-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-439113-m03 --network ha-439113 --ip 192.168.49.4 --volume ha-439113-m03:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
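The m03 container is attached to the existing ha-439113 network with the pre-calculated static address 192.168.49.4. The assignment can be checked with the same inspect template the tooling uses elsewhere in this log:

	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-439113-m03   # expected: 192.168.49.4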
	I1115 10:54:26.040112  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Running}}
	I1115 10:54:26.066450  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 10:54:26.092550  615834 cli_runner.go:164] Run: docker exec ha-439113-m03 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:54:26.151852  615834 oci.go:144] the created container "ha-439113-m03" has a running status.
	I1115 10:54:26.151878  615834 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa...
	I1115 10:54:27.113374  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1115 10:54:27.113470  615834 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:54:27.134901  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 10:54:27.152034  615834 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:54:27.152059  615834 kic_runner.go:114] Args: [docker exec --privileged ha-439113-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:54:27.195662  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 10:54:27.223784  615834 machine.go:94] provisionDockerMachine start ...
	I1115 10:54:27.223875  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:27.242041  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:54:27.242447  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33534 <nil> <nil>}
	I1115 10:54:27.242463  615834 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:54:27.243142  615834 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 10:54:30.397276  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m03
	
	I1115 10:54:30.397299  615834 ubuntu.go:182] provisioning hostname "ha-439113-m03"
	I1115 10:54:30.397373  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:30.416594  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:54:30.417064  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33534 <nil> <nil>}
	I1115 10:54:30.417081  615834 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113-m03 && echo "ha-439113-m03" | sudo tee /etc/hostname
	I1115 10:54:30.584566  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m03
	
	I1115 10:54:30.584689  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:30.605012  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:54:30.605315  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33534 <nil> <nil>}
	I1115 10:54:30.605332  615834 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:54:30.765007  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:54:30.765033  615834 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 10:54:30.765049  615834 ubuntu.go:190] setting up certificates
	I1115 10:54:30.765058  615834 provision.go:84] configureAuth start
	I1115 10:54:30.765121  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 10:54:30.786754  615834 provision.go:143] copyHostCerts
	I1115 10:54:30.786811  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:54:30.786846  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 10:54:30.786858  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:54:30.786950  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 10:54:30.787046  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:54:30.787077  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 10:54:30.787083  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:54:30.787114  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 10:54:30.787169  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:54:30.787194  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 10:54:30.787201  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:54:30.787225  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 10:54:30.787298  615834 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113-m03 san=[127.0.0.1 192.168.49.4 ha-439113-m03 localhost minikube]
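
The server certificate generated above carries the SAN list [127.0.0.1 192.168.49.4 ha-439113-m03 localhost minikube] and is signed by the minikube CA. A rough Go sketch of producing a certificate with such SANs (self-signed here for brevity rather than CA-signed; names and IPs are taken from the log, everything else is illustrative):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-439113-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as reported in the log above.
			DNSNames:    []string{"ha-439113-m03", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
		}
		// Self-signed for the sketch; minikube signs with its CA certificate and key instead.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		out, err := os.Create("server.pem")
		if err != nil {
			log.Fatal(err)
		}
		defer out.Close()
		pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
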
	I1115 10:54:31.527679  615834 provision.go:177] copyRemoteCerts
	I1115 10:54:31.527756  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:54:31.527803  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:31.550626  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 10:54:31.657012  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 10:54:31.657081  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 10:54:31.677807  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 10:54:31.677871  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 10:54:31.700160  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 10:54:31.700222  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:54:31.721356  615834 provision.go:87] duration metric: took 956.283987ms to configureAuth
	I1115 10:54:31.721382  615834 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:54:31.721638  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:54:31.721743  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:31.746090  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:54:31.746393  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33534 <nil> <nil>}
	I1115 10:54:31.746414  615834 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:54:32.073048  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:54:32.073072  615834 machine.go:97] duration metric: took 4.849269283s to provisionDockerMachine
	I1115 10:54:32.073081  615834 client.go:176] duration metric: took 11.454730895s to LocalClient.Create
	I1115 10:54:32.073100  615834 start.go:167] duration metric: took 11.454796102s to libmachine.API.Create "ha-439113"
	I1115 10:54:32.073106  615834 start.go:293] postStartSetup for "ha-439113-m03" (driver="docker")
	I1115 10:54:32.073128  615834 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:54:32.073207  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:54:32.073254  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:32.094317  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 10:54:32.205944  615834 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:54:32.211106  615834 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:54:32.211131  615834 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:54:32.211141  615834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 10:54:32.211196  615834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 10:54:32.211273  615834 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 10:54:32.211280  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 10:54:32.211381  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:54:32.220211  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:54:32.239918  615834 start.go:296] duration metric: took 166.785032ms for postStartSetup
	I1115 10:54:32.240282  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 10:54:32.257694  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:54:32.257993  615834 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:54:32.258046  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:32.284964  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 10:54:32.386885  615834 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:54:32.392199  615834 start.go:128] duration metric: took 11.779435584s to createHost
	I1115 10:54:32.392225  615834 start.go:83] releasing machines lock for "ha-439113-m03", held for 11.779599443s
	I1115 10:54:32.392307  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 10:54:32.415833  615834 out.go:179] * Found network options:
	I1115 10:54:32.418534  615834 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1115 10:54:32.421302  615834 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 10:54:32.421337  615834 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 10:54:32.421361  615834 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 10:54:32.421377  615834 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 10:54:32.421453  615834 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:54:32.421499  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:32.421776  615834 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:54:32.421830  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:32.447145  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 10:54:32.460578  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 10:54:32.608958  615834 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:54:32.674684  615834 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:54:32.674759  615834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:54:32.704172  615834 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 10:54:32.704198  615834 start.go:496] detecting cgroup driver to use...
	I1115 10:54:32.704232  615834 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:54:32.704283  615834 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:54:32.723324  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:54:32.737729  615834 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:54:32.737795  615834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:54:32.756038  615834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:54:32.775957  615834 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:54:32.915213  615834 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:54:33.052803  615834 docker.go:234] disabling docker service ...
	I1115 10:54:33.052907  615834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:54:33.078043  615834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:54:33.094926  615834 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:54:33.230549  615834 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:54:33.359393  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:54:33.372746  615834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:54:33.388589  615834 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:54:33.388660  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:54:33.400545  615834 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:54:33.400613  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:54:33.412976  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:54:33.422489  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:54:33.431851  615834 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:54:33.441690  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:54:33.452824  615834 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:54:33.469461  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:54:33.479689  615834 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:54:33.487650  615834 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:54:33.495844  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:54:33.622741  615834 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:54:33.759405  615834 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:54:33.759527  615834 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:54:33.763550  615834 start.go:564] Will wait 60s for crictl version
	I1115 10:54:33.763664  615834 ssh_runner.go:195] Run: which crictl
	I1115 10:54:33.767583  615834 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:54:33.803949  615834 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
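
The runtime check above boils down to waiting for the CRI socket to reappear after the restart and then asking crictl for the runtime version. A small Go sketch of that polling loop (socket and crictl paths taken from the log; a simplification of what minikube actually does, and it assumes it runs on the node with sudo available):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		const sock = "/var/run/crio/crio.sock"

		// Wait up to 60s for the CRI-O socket to exist, as the log does.
		deadline := time.Now().Add(60 * time.Second)
		for {
			if _, err := os.Stat(sock); err == nil {
				break
			}
			if time.Now().After(deadline) {
				log.Fatalf("timed out waiting for %s", sock)
			}
			time.Sleep(500 * time.Millisecond)
		}

		// Ask the runtime for its version via crictl.
		out, err := exec.Command("sudo", "/usr/local/bin/crictl", "version").CombinedOutput()
		if err != nil {
			log.Fatalf("crictl version failed: %v\n%s", err, out)
		}
		fmt.Print(string(out))
	}
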
	I1115 10:54:33.804117  615834 ssh_runner.go:195] Run: crio --version
	I1115 10:54:33.834618  615834 ssh_runner.go:195] Run: crio --version
	I1115 10:54:33.872348  615834 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:54:33.875141  615834 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 10:54:33.878028  615834 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1115 10:54:33.880834  615834 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:54:33.898757  615834 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 10:54:33.902716  615834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:54:33.913050  615834 mustload.go:66] Loading cluster: ha-439113
	I1115 10:54:33.913297  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:54:33.913562  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:54:33.932916  615834 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:54:33.933195  615834 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.4
	I1115 10:54:33.933212  615834 certs.go:195] generating shared ca certs ...
	I1115 10:54:33.933228  615834 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:54:33.933349  615834 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 10:54:33.933400  615834 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 10:54:33.933414  615834 certs.go:257] generating profile certs ...
	I1115 10:54:33.933496  615834 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 10:54:33.933533  615834 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.9f17abc1
	I1115 10:54:33.933550  615834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.9f17abc1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1115 10:54:34.392462  615834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.9f17abc1 ...
	I1115 10:54:34.392493  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.9f17abc1: {Name:mk57469c45faf40e8877724cc1e54dca438fdabb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:54:34.392690  615834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.9f17abc1 ...
	I1115 10:54:34.392707  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.9f17abc1: {Name:mke21c76fcddbd31cd7b88d6b0fe560b003ef850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:54:34.392820  615834 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.9f17abc1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt
	I1115 10:54:34.392987  615834 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.9f17abc1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key
	I1115 10:54:34.393123  615834 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
	I1115 10:54:34.393142  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 10:54:34.393159  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 10:54:34.393180  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 10:54:34.393192  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 10:54:34.393210  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 10:54:34.393228  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 10:54:34.393240  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 10:54:34.393258  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 10:54:34.393313  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 10:54:34.393346  615834 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 10:54:34.393360  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:54:34.393384  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 10:54:34.393407  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:54:34.393437  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 10:54:34.393481  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:54:34.393513  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 10:54:34.393530  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:54:34.393541  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 10:54:34.393601  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:54:34.417321  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:54:34.517227  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 10:54:34.521295  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 10:54:34.529863  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 10:54:34.533585  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 10:54:34.542122  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 10:54:34.545856  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 10:54:34.554443  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 10:54:34.558198  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 10:54:34.567184  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 10:54:34.570858  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 10:54:34.579554  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 10:54:34.583242  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 10:54:34.592001  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:54:34.611784  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 10:54:34.632105  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:54:34.651510  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:54:34.679392  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1115 10:54:34.701186  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:54:34.721218  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:54:34.739838  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:54:34.758607  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 10:54:34.783858  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:54:34.804612  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 10:54:34.823088  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 10:54:34.836703  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 10:54:34.856372  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 10:54:34.869724  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 10:54:34.884327  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 10:54:34.898714  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 10:54:34.912320  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 10:54:34.928648  615834 ssh_runner.go:195] Run: openssl version
	I1115 10:54:34.936171  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 10:54:34.944931  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 10:54:34.949201  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 10:54:34.949309  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 10:54:34.990696  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:54:34.999528  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:54:35.008123  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:54:35.014850  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:54:35.014942  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:54:35.065078  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:54:35.074023  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 10:54:35.082594  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 10:54:35.086579  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 10:54:35.086700  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 10:54:35.128687  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
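
The three blocks above install each certificate under /usr/share/ca-certificates and then link it into /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL-based clients locate trusted CAs. A rough Go sketch of computing that hash and creating the link (paths from the log; assumes it runs as root on the node, and shells out to openssl rather than reimplementing the hash):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"

		// Ask openssl for the subject hash, e.g. "b5213941", as the log does.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))

		// Link the certificate under /etc/ssl/certs/<hash>.0 so OpenSSL clients find it.
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // ignore "not found"; mirrors ln -fs
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", cert, "->", link)
	}
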
	I1115 10:54:35.137533  615834 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:54:35.141312  615834 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:54:35.141414  615834 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1115 10:54:35.141513  615834 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:54:35.141549  615834 kube-vip.go:115] generating kube-vip config ...
	I1115 10:54:35.141607  615834 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 10:54:35.154374  615834 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:54:35.154479  615834 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1115 10:54:35.154575  615834 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:54:35.162846  615834 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:54:35.162920  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 10:54:35.171422  615834 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 10:54:35.184497  615834 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:54:35.198720  615834 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 10:54:35.221440  615834 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 10:54:35.226193  615834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:54:35.237290  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:54:35.357843  615834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:54:35.375474  615834 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:54:35.375810  615834 start.go:318] joinCluster: &{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:54:35.375986  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1115 10:54:35.376046  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:54:35.394797  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:54:35.573637  615834 start.go:344] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:54:35.573737  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 57tqrb.oxnolth70l2ucbah --discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-439113-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I1115 10:54:57.764910  615834 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 57tqrb.oxnolth70l2ucbah --discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-439113-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (22.191150768s)
	I1115 10:54:57.764977  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1115 10:54:58.434841  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-439113-m03 minikube.k8s.io/updated_at=2025_11_15T10_54_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=ha-439113 minikube.k8s.io/primary=false
	I1115 10:54:58.570609  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-439113-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1115 10:54:58.705795  615834 start.go:320] duration metric: took 23.329979868s to joinCluster
	I1115 10:54:58.705850  615834 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:54:58.706784  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:54:58.708964  615834 out.go:179] * Verifying Kubernetes components...
	I1115 10:54:58.711919  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:54:58.903183  615834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:54:58.919643  615834 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 10:54:58.919719  615834 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 10:54:58.920020  615834 node_ready.go:35] waiting up to 6m0s for node "ha-439113-m03" to be "Ready" ...
	W1115 10:55:00.924525  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:03.423536  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:05.426488  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:07.924192  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:09.924349  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:12.424510  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:14.924474  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:17.423576  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:19.424146  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:21.923891  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:23.924593  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:25.924801  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:27.925292  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:29.926056  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:32.423860  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:34.424326  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:36.923529  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:38.924051  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:41.423264  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	I1115 10:55:42.423952  615834 node_ready.go:49] node "ha-439113-m03" is "Ready"
	I1115 10:55:42.423989  615834 node_ready.go:38] duration metric: took 43.503945735s for node "ha-439113-m03" to be "Ready" ...
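
The "waiting up to 6m0s for node ... to be Ready" loop above simply polls the node object until its Ready condition turns True. A minimal client-go sketch of the same check (the kubeconfig path is a placeholder; simplified relative to minikube's node_ready helper):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		nodeName := "ha-439113-m03"
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep retrying on transient errors
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			log.Fatalf("node %s never became Ready: %v", nodeName, err)
		}
		fmt.Printf("node %s is Ready\n", nodeName)
	}
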
	I1115 10:55:42.424005  615834 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:55:42.424111  615834 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:55:42.440197  615834 api_server.go:72] duration metric: took 43.734318984s to wait for apiserver process to appear ...
	I1115 10:55:42.440226  615834 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:55:42.440245  615834 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 10:55:42.448913  615834 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 10:55:42.449893  615834 api_server.go:141] control plane version: v1.34.1
	I1115 10:55:42.449917  615834 api_server.go:131] duration metric: took 9.68478ms to wait for apiserver health ...
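
The healthz probe above is an HTTPS GET against the apiserver that trusts the cluster CA and succeeds when the body is "ok". A small sketch of the same request, under the assumption that /healthz is reachable without client credentials, as it is with the default RBAC (endpoint and CA path taken from the log):

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"log"
		"net/http"
		"os"
		"time"
	)

	func main() {
		// Cluster CA from the log; adjust for your own profile.
		caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(caPEM) {
			log.Fatal("could not parse cluster CA certificate")
		}

		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
	}
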
	I1115 10:55:42.449926  615834 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:55:42.456190  615834 system_pods.go:59] 24 kube-system pods found
	I1115 10:55:42.456222  615834 system_pods.go:61] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running
	I1115 10:55:42.456229  615834 system_pods.go:61] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running
	I1115 10:55:42.456234  615834 system_pods.go:61] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 10:55:42.456238  615834 system_pods.go:61] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 10:55:42.456243  615834 system_pods.go:61] "etcd-ha-439113-m03" [5e59ce68-9c25-4639-ac5a-1f55855c2a60] Running
	I1115 10:55:42.456249  615834 system_pods.go:61] "kindnet-kxl4t" [99aa3cce-8825-4785-a8c2-b42146240e09] Running
	I1115 10:55:42.456259  615834 system_pods.go:61] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 10:55:42.456264  615834 system_pods.go:61] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running
	I1115 10:55:42.456271  615834 system_pods.go:61] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 10:55:42.456276  615834 system_pods.go:61] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 10:55:42.456289  615834 system_pods.go:61] "kube-apiserver-ha-439113-m03" [46354a8c-2a61-4934-8b1a-57c563aa326b] Running
	I1115 10:55:42.456294  615834 system_pods.go:61] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running
	I1115 10:55:42.456299  615834 system_pods.go:61] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 10:55:42.456304  615834 system_pods.go:61] "kube-controller-manager-ha-439113-m03" [555d953c-b848-4daa-90c5-07b51c5c7722] Running
	I1115 10:55:42.456313  615834 system_pods.go:61] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running
	I1115 10:55:42.456317  615834 system_pods.go:61] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 10:55:42.456321  615834 system_pods.go:61] "kube-proxy-njlxj" [9150615b-96b9-416b-a5ca-79c380a8a9cb] Running
	I1115 10:55:42.456326  615834 system_pods.go:61] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running
	I1115 10:55:42.456331  615834 system_pods.go:61] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 10:55:42.456335  615834 system_pods.go:61] "kube-scheduler-ha-439113-m03" [e18cb155-9e7b-43e1-818b-bfff6a289f39] Running
	I1115 10:55:42.456343  615834 system_pods.go:61] "kube-vip-ha-439113" [397a8753-e06e-4144-882e-6bbf595950d8] Running
	I1115 10:55:42.456347  615834 system_pods.go:61] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 10:55:42.456353  615834 system_pods.go:61] "kube-vip-ha-439113-m03" [c0ddae32-acc6-4cda-8dde-084b2eea14a8] Running
	I1115 10:55:42.456358  615834 system_pods.go:61] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running
	I1115 10:55:42.456366  615834 system_pods.go:74] duration metric: took 6.434166ms to wait for pod list to return data ...
	I1115 10:55:42.456381  615834 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:55:42.460030  615834 default_sa.go:45] found service account: "default"
	I1115 10:55:42.460053  615834 default_sa.go:55] duration metric: took 3.666881ms for default service account to be created ...
	I1115 10:55:42.460063  615834 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:55:42.466296  615834 system_pods.go:86] 24 kube-system pods found
	I1115 10:55:42.466327  615834 system_pods.go:89] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running
	I1115 10:55:42.466334  615834 system_pods.go:89] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running
	I1115 10:55:42.466339  615834 system_pods.go:89] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 10:55:42.466343  615834 system_pods.go:89] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 10:55:42.466347  615834 system_pods.go:89] "etcd-ha-439113-m03" [5e59ce68-9c25-4639-ac5a-1f55855c2a60] Running
	I1115 10:55:42.466352  615834 system_pods.go:89] "kindnet-kxl4t" [99aa3cce-8825-4785-a8c2-b42146240e09] Running
	I1115 10:55:42.466357  615834 system_pods.go:89] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 10:55:42.466361  615834 system_pods.go:89] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running
	I1115 10:55:42.466371  615834 system_pods.go:89] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 10:55:42.466376  615834 system_pods.go:89] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 10:55:42.466383  615834 system_pods.go:89] "kube-apiserver-ha-439113-m03" [46354a8c-2a61-4934-8b1a-57c563aa326b] Running
	I1115 10:55:42.466387  615834 system_pods.go:89] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running
	I1115 10:55:42.466397  615834 system_pods.go:89] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 10:55:42.466402  615834 system_pods.go:89] "kube-controller-manager-ha-439113-m03" [555d953c-b848-4daa-90c5-07b51c5c7722] Running
	I1115 10:55:42.466408  615834 system_pods.go:89] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running
	I1115 10:55:42.466412  615834 system_pods.go:89] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 10:55:42.466425  615834 system_pods.go:89] "kube-proxy-njlxj" [9150615b-96b9-416b-a5ca-79c380a8a9cb] Running
	I1115 10:55:42.466430  615834 system_pods.go:89] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running
	I1115 10:55:42.466434  615834 system_pods.go:89] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 10:55:42.466441  615834 system_pods.go:89] "kube-scheduler-ha-439113-m03" [e18cb155-9e7b-43e1-818b-bfff6a289f39] Running
	I1115 10:55:42.466445  615834 system_pods.go:89] "kube-vip-ha-439113" [397a8753-e06e-4144-882e-6bbf595950d8] Running
	I1115 10:55:42.466449  615834 system_pods.go:89] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 10:55:42.466453  615834 system_pods.go:89] "kube-vip-ha-439113-m03" [c0ddae32-acc6-4cda-8dde-084b2eea14a8] Running
	I1115 10:55:42.466459  615834 system_pods.go:89] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running
	I1115 10:55:42.466465  615834 system_pods.go:126] duration metric: took 6.39762ms to wait for k8s-apps to be running ...
	I1115 10:55:42.466477  615834 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:55:42.466532  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:55:42.483083  615834 system_svc.go:56] duration metric: took 16.595924ms WaitForService to wait for kubelet
	I1115 10:55:42.483109  615834 kubeadm.go:587] duration metric: took 43.777236154s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:55:42.483126  615834 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:55:42.486588  615834 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:55:42.486618  615834 node_conditions.go:123] node cpu capacity is 2
	I1115 10:55:42.486630  615834 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:55:42.486634  615834 node_conditions.go:123] node cpu capacity is 2
	I1115 10:55:42.486639  615834 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:55:42.486643  615834 node_conditions.go:123] node cpu capacity is 2
	I1115 10:55:42.486648  615834 node_conditions.go:105] duration metric: took 3.516274ms to run NodePressure ...
	I1115 10:55:42.486661  615834 start.go:242] waiting for startup goroutines ...
	I1115 10:55:42.486686  615834 start.go:256] writing updated cluster config ...
	I1115 10:55:42.487017  615834 ssh_runner.go:195] Run: rm -f paused
	I1115 10:55:42.492297  615834 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:55:42.492803  615834 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:55:42.512652  615834 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4g6sm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.521118  615834 pod_ready.go:94] pod "coredns-66bc5c9577-4g6sm" is "Ready"
	I1115 10:55:42.521148  615834 pod_ready.go:86] duration metric: took 8.46948ms for pod "coredns-66bc5c9577-4g6sm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.521159  615834 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mlm6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.530647  615834 pod_ready.go:94] pod "coredns-66bc5c9577-mlm6m" is "Ready"
	I1115 10:55:42.530675  615834 pod_ready.go:86] duration metric: took 9.510034ms for pod "coredns-66bc5c9577-mlm6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.534052  615834 pod_ready.go:83] waiting for pod "etcd-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.540869  615834 pod_ready.go:94] pod "etcd-ha-439113" is "Ready"
	I1115 10:55:42.540905  615834 pod_ready.go:86] duration metric: took 6.827976ms for pod "etcd-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.540914  615834 pod_ready.go:83] waiting for pod "etcd-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.547047  615834 pod_ready.go:94] pod "etcd-ha-439113-m02" is "Ready"
	I1115 10:55:42.547075  615834 pod_ready.go:86] duration metric: took 6.153818ms for pod "etcd-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.547085  615834 pod_ready.go:83] waiting for pod "etcd-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.694101  615834 request.go:683] "Waited before sending request" delay="146.197061ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-439113-m03"
	I1115 10:55:42.893874  615834 request.go:683] "Waited before sending request" delay="196.290075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 10:55:42.897978  615834 pod_ready.go:94] pod "etcd-ha-439113-m03" is "Ready"
	I1115 10:55:42.898008  615834 pod_ready.go:86] duration metric: took 350.916746ms for pod "etcd-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:43.093314  615834 request.go:683] "Waited before sending request" delay="195.208873ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1115 10:55:43.097536  615834 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:43.293993  615834 request.go:683] "Waited before sending request" delay="196.352501ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113"
	I1115 10:55:43.493646  615834 request.go:683] "Waited before sending request" delay="196.260142ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 10:55:43.496635  615834 pod_ready.go:94] pod "kube-apiserver-ha-439113" is "Ready"
	I1115 10:55:43.496659  615834 pod_ready.go:86] duration metric: took 399.090863ms for pod "kube-apiserver-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:43.496669  615834 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:43.694042  615834 request.go:683] "Waited before sending request" delay="197.29025ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113-m02"
	I1115 10:55:43.893795  615834 request.go:683] "Waited before sending request" delay="196.360467ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 10:55:43.897823  615834 pod_ready.go:94] pod "kube-apiserver-ha-439113-m02" is "Ready"
	I1115 10:55:43.897863  615834 pod_ready.go:86] duration metric: took 401.185344ms for pod "kube-apiserver-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:43.897873  615834 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:44.094259  615834 request.go:683] "Waited before sending request" delay="196.313772ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113-m03"
	I1115 10:55:44.293320  615834 request.go:683] "Waited before sending request" delay="195.273875ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 10:55:44.297020  615834 pod_ready.go:94] pod "kube-apiserver-ha-439113-m03" is "Ready"
	I1115 10:55:44.297051  615834 pod_ready.go:86] duration metric: took 399.170241ms for pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:44.493352  615834 request.go:683] "Waited before sending request" delay="196.168342ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1115 10:55:44.497474  615834 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:44.693864  615834 request.go:683] "Waited before sending request" delay="196.265788ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-439113"
	I1115 10:55:44.893500  615834 request.go:683] "Waited before sending request" delay="196.2207ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 10:55:44.897637  615834 pod_ready.go:94] pod "kube-controller-manager-ha-439113" is "Ready"
	I1115 10:55:44.897665  615834 pod_ready.go:86] duration metric: took 400.156714ms for pod "kube-controller-manager-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:44.897677  615834 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:45.096158  615834 request.go:683] "Waited before sending request" delay="198.388069ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-439113-m02"
	I1115 10:55:45.294224  615834 request.go:683] "Waited before sending request" delay="191.268446ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 10:55:45.298265  615834 pod_ready.go:94] pod "kube-controller-manager-ha-439113-m02" is "Ready"
	I1115 10:55:45.298296  615834 pod_ready.go:86] duration metric: took 400.61198ms for pod "kube-controller-manager-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:45.298307  615834 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:45.493762  615834 request.go:683] "Waited before sending request" delay="195.347377ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-439113-m03"
	I1115 10:55:45.693315  615834 request.go:683] "Waited before sending request" delay="196.157273ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 10:55:45.696935  615834 pod_ready.go:94] pod "kube-controller-manager-ha-439113-m03" is "Ready"
	I1115 10:55:45.696960  615834 pod_ready.go:86] duration metric: took 398.646459ms for pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:45.893314  615834 request.go:683] "Waited before sending request" delay="196.244659ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1115 10:55:45.898174  615834 pod_ready.go:83] waiting for pod "kube-proxy-k7bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:46.093461  615834 request.go:683] "Waited before sending request" delay="195.191183ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7bcn"
	I1115 10:55:46.293301  615834 request.go:683] "Waited before sending request" delay="196.162237ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 10:55:46.297337  615834 pod_ready.go:94] pod "kube-proxy-k7bcn" is "Ready"
	I1115 10:55:46.297371  615834 pod_ready.go:86] duration metric: took 399.168321ms for pod "kube-proxy-k7bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:46.297380  615834 pod_ready.go:83] waiting for pod "kube-proxy-kgftx" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:46.493781  615834 request.go:683] "Waited before sending request" delay="196.313435ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kgftx"
	I1115 10:55:46.693593  615834 request.go:683] "Waited before sending request" delay="196.546283ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 10:55:46.699555  615834 pod_ready.go:94] pod "kube-proxy-kgftx" is "Ready"
	I1115 10:55:46.699584  615834 pod_ready.go:86] duration metric: took 402.19773ms for pod "kube-proxy-kgftx" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:46.699594  615834 pod_ready.go:83] waiting for pod "kube-proxy-njlxj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:46.893960  615834 request.go:683] "Waited before sending request" delay="194.292628ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-njlxj"
	I1115 10:55:47.093699  615834 request.go:683] "Waited before sending request" delay="196.242706ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 10:55:47.097062  615834 pod_ready.go:94] pod "kube-proxy-njlxj" is "Ready"
	I1115 10:55:47.097099  615834 pod_ready.go:86] duration metric: took 397.498607ms for pod "kube-proxy-njlxj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:47.293346  615834 request.go:683] "Waited before sending request" delay="196.125125ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1115 10:55:47.297041  615834 pod_ready.go:83] waiting for pod "kube-scheduler-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:47.493398  615834 request.go:683] "Waited before sending request" delay="196.251543ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113"
	I1115 10:55:47.694024  615834 request.go:683] "Waited before sending request" delay="197.311831ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 10:55:47.697567  615834 pod_ready.go:94] pod "kube-scheduler-ha-439113" is "Ready"
	I1115 10:55:47.697592  615834 pod_ready.go:86] duration metric: took 400.52343ms for pod "kube-scheduler-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:47.697602  615834 pod_ready.go:83] waiting for pod "kube-scheduler-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:47.894055  615834 request.go:683] "Waited before sending request" delay="196.361846ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113-m02"
	I1115 10:55:48.093904  615834 request.go:683] "Waited before sending request" delay="195.321687ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 10:55:48.097248  615834 pod_ready.go:94] pod "kube-scheduler-ha-439113-m02" is "Ready"
	I1115 10:55:48.097282  615834 pod_ready.go:86] duration metric: took 399.672892ms for pod "kube-scheduler-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:48.097293  615834 pod_ready.go:83] waiting for pod "kube-scheduler-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:48.293745  615834 request.go:683] "Waited before sending request" delay="196.348718ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113-m03"
	I1115 10:55:48.493608  615834 request.go:683] "Waited before sending request" delay="196.332299ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 10:55:48.496763  615834 pod_ready.go:94] pod "kube-scheduler-ha-439113-m03" is "Ready"
	I1115 10:55:48.496836  615834 pod_ready.go:86] duration metric: took 399.525477ms for pod "kube-scheduler-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:48.496916  615834 pod_ready.go:40] duration metric: took 6.00458265s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:55:48.566701  615834 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:55:48.569888  615834 out.go:179] * Done! kubectl is now configured to use "ha-439113" cluster and "default" namespace by default
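
Note on the pod_ready waits above: the waiter polls each kube-system control-plane pod, selected by the component/k8s-app labels listed at 10:55:42, until its PodReady condition is True (the "Waited before sending request" lines are client-go's client-side rate limiting, not API priority and fairness). A minimal client-go sketch of the same readiness check, assuming a kubeconfig at the default path; the kubeconfig location and the podIsReady helper are illustrative, not taken from minikube's source:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// podIsReady reports whether the PodReady condition is True, which is the
// condition the pod_ready waiter in the log above is polling for.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; any context pointing at the ha-439113 cluster works.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The same label selectors the waiter cycles through in the log.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-45s ready=%v\n", p.Name, podIsReady(&p))
		}
	}
}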
	
	
	==> CRI-O <==
	Nov 15 10:53:32 ha-439113 crio[839]: time="2025-11-15T10:53:32.212255023Z" level=info msg="Created container ebc82b2592dea9050aa85b52fa9673230a41ffc541b1a9be7f57add5a41661ef: kube-system/storage-provisioner/storage-provisioner" id=c517296d-bd9b-4dd7-ad7d-ff27ad3f16a2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:53:32 ha-439113 crio[839]: time="2025-11-15T10:53:32.213468255Z" level=info msg="Starting container: ebc82b2592dea9050aa85b52fa9673230a41ffc541b1a9be7f57add5a41661ef" id=80cb212e-b4cc-44ae-8599-da6627d6502b name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:53:32 ha-439113 crio[839]: time="2025-11-15T10:53:32.215539196Z" level=info msg="Started container" PID=1833 containerID=ebc82b2592dea9050aa85b52fa9673230a41ffc541b1a9be7f57add5a41661ef description=kube-system/storage-provisioner/storage-provisioner id=80cb212e-b4cc-44ae-8599-da6627d6502b name=/runtime.v1.RuntimeService/StartContainer sandboxID=b490c9b037c7b899eacaef5b671bb76b4b6a5cd04156c3467f671dd334f6b230
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.631309762Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-vddcm/POD" id=114264a6-9671-4fa2-9ed7-ad5ab056ed9f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.631381419Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.64104073Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-vddcm Namespace:default ID:9a1924d1444fcffae5090157aa58d5f2c625d79ed31a0ff72e3fd6567559a4d4 UID:92adc10b-e910-45d1-8267-ee2e884d0dcc NetNS:/var/run/netns/e82fa3ee-f2c7-4bec-bc77-3640c59596cc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000138570}] Aliases:map[]}"
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.641091052Z" level=info msg="Adding pod default_busybox-7b57f96db7-vddcm to CNI network \"kindnet\" (type=ptp)"
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.661653537Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-vddcm Namespace:default ID:9a1924d1444fcffae5090157aa58d5f2c625d79ed31a0ff72e3fd6567559a4d4 UID:92adc10b-e910-45d1-8267-ee2e884d0dcc NetNS:/var/run/netns/e82fa3ee-f2c7-4bec-bc77-3640c59596cc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000138570}] Aliases:map[]}"
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.662092549Z" level=info msg="Checking pod default_busybox-7b57f96db7-vddcm for CNI network kindnet (type=ptp)"
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.667262676Z" level=info msg="Ran pod sandbox 9a1924d1444fcffae5090157aa58d5f2c625d79ed31a0ff72e3fd6567559a4d4 with infra container: default/busybox-7b57f96db7-vddcm/POD" id=114264a6-9671-4fa2-9ed7-ad5ab056ed9f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.668959811Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=8ecc1fc7-1996-45cc-9d8e-ac6a7fd74c74 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.669233808Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=8ecc1fc7-1996-45cc-9d8e-ac6a7fd74c74 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.669282695Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28 found" id=8ecc1fc7-1996-45cc-9d8e-ac6a7fd74c74 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.67127745Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=e4852942-9d64-4f44-8ef8-40df183d7f24 name=/runtime.v1.ImageService/PullImage
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.676540748Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.775299322Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=e4852942-9d64-4f44-8ef8-40df183d7f24 name=/runtime.v1.ImageService/PullImage
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.777574318Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=c4332747-14d1-4cdb-ae76-b4cb071a9e81 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.779308927Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=87342736-f913-41d9-a9a6-1048cf8ee9e0 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.784697674Z" level=info msg="Creating container: default/busybox-7b57f96db7-vddcm/busybox" id=4c7f054c-47ae-4bfe-8ca3-cfc91b62c944 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.785046347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.79768124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.801962496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.821311715Z" level=info msg="Created container 3f6eb171bd0175882d73d20d75a54b3a72cb956bd407e8095a60998cd1a10870: default/busybox-7b57f96db7-vddcm/busybox" id=4c7f054c-47ae-4bfe-8ca3-cfc91b62c944 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.822504287Z" level=info msg="Starting container: 3f6eb171bd0175882d73d20d75a54b3a72cb956bd407e8095a60998cd1a10870" id=d2311b24-aa7d-4466-af8b-25c404bf84c7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.825172125Z" level=info msg="Started container" PID=2006 containerID=3f6eb171bd0175882d73d20d75a54b3a72cb956bd407e8095a60998cd1a10870 description=default/busybox-7b57f96db7-vddcm/busybox id=d2311b24-aa7d-4466-af8b-25c404bf84c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a1924d1444fcffae5090157aa58d5f2c625d79ed31a0ff72e3fd6567559a4d4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	3f6eb171bd017       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   10 minutes ago      Running             busybox                   0                   9a1924d1444fc       busybox-7b57f96db7-vddcm            default
	e034410e44a50       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 minutes ago      Running             coredns                   0                   220741ce57653       coredns-66bc5c9577-mlm6m            kube-system
	ebc82b2592dea       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 minutes ago      Running             storage-provisioner       0                   b490c9b037c7b       storage-provisioner                 kube-system
	1bba46622cf08       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 minutes ago      Running             coredns                   0                   14afa271db53e       coredns-66bc5c9577-4g6sm            kube-system
	c5041c1c9a7b2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      13 minutes ago      Running             kindnet-cni               0                   929899784c659       kindnet-q4kpj                       kube-system
	32eb60c7f45d9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      13 minutes ago      Running             kube-proxy                0                   fb96f9b749aa4       kube-proxy-k7bcn                    kube-system
	f6362682174af       ghcr.io/kube-vip/kube-vip@sha256:a9c131fb1bd4690cd4563761c2f545eb89b92cc8ea19aec96c833d1b4b0211eb     14 minutes ago      Running             kube-vip                  0                   28ed11a5928a1       kube-vip-ha-439113                  kube-system
	3460218d601a4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      14 minutes ago      Running             kube-scheduler            0                   1917432d67012       kube-scheduler-ha-439113            kube-system
	12d1c250e31ea       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      14 minutes ago      Running             kube-controller-manager   0                   ecd91e2412183       kube-controller-manager-ha-439113   kube-system
	07ac2a5381c76       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      14 minutes ago      Running             kube-apiserver            0                   e329af05eba97       kube-apiserver-ha-439113            kube-system
	f4035d6f71e56       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      14 minutes ago      Running             etcd                      0                   f73860106416b       etcd-ha-439113                      kube-system
	
	
	==> coredns [1bba46622cf0862562b963eed4ad3b12dbcc4badddbf0a0b56dee4a1b3c9b955] <==
	[INFO] 10.244.2.2:35766 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000098167s
	[INFO] 10.244.1.3:60510 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154644s
	[INFO] 10.244.1.3:39741 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002283168s
	[INFO] 10.244.1.3:54024 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130906s
	[INFO] 10.244.1.3:55209 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105027s
	[INFO] 10.244.1.3:35197 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001586463s
	[INFO] 10.244.1.3:45473 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115276s
	[INFO] 10.244.1.3:34424 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106496s
	[INFO] 10.244.0.4:37123 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009299s
	[INFO] 10.244.0.4:49387 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001246841s
	[INFO] 10.244.0.4:38072 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001117117s
	[INFO] 10.244.2.2:33563 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000252852s
	[INFO] 10.244.2.2:40237 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125253s
	[INFO] 10.244.1.3:34350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126508s
	[INFO] 10.244.1.3:39952 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131957s
	[INFO] 10.244.1.3:38822 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112092s
	[INFO] 10.244.0.4:45556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157327s
	[INFO] 10.244.0.4:57618 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145249s
	[INFO] 10.244.2.2:33582 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106922s
	[INFO] 10.244.2.2:37235 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165556s
	[INFO] 10.244.1.3:39333 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132473s
	[INFO] 10.244.1.3:52420 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000092711s
	[INFO] 10.244.0.4:51209 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000068202s
	[INFO] 10.244.0.4:54534 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000090282s
	[INFO] 10.244.2.2:51431 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069769s
	
	
	==> coredns [e034410e44a50c4b37d4c79d28f641bcd3feafc9353b925fffc80b38b5c23d67] <==
	[INFO] 10.244.2.2:58996 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000063582s
	[INFO] 10.244.1.3:43592 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156514s
	[INFO] 10.244.0.4:46132 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131341s
	[INFO] 10.244.0.4:43399 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103583s
	[INFO] 10.244.0.4:40629 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097371s
	[INFO] 10.244.0.4:46835 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093392s
	[INFO] 10.244.0.4:56743 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063279s
	[INFO] 10.244.2.2:53643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122127s
	[INFO] 10.244.2.2:33972 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001226434s
	[INFO] 10.244.2.2:35377 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000169536s
	[INFO] 10.244.2.2:47011 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138242s
	[INFO] 10.244.2.2:42897 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001142306s
	[INFO] 10.244.2.2:33366 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159305s
	[INFO] 10.244.1.3:56891 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000120305s
	[INFO] 10.244.0.4:47049 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221s
	[INFO] 10.244.0.4:47618 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073724s
	[INFO] 10.244.2.2:35579 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000215461s
	[INFO] 10.244.2.2:44191 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00017619s
	[INFO] 10.244.1.3:45635 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185799s
	[INFO] 10.244.1.3:37107 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145857s
	[INFO] 10.244.0.4:48143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147284s
	[INFO] 10.244.0.4:55785 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084629s
	[INFO] 10.244.2.2:41258 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111205s
	[INFO] 10.244.2.2:40201 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120871s
	[INFO] 10.244.2.2:44090 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084333s
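
The CoreDNS entries above are ordinary A/AAAA/PTR lookups from the busybox pods (10.244.x.x) against the cluster DNS service; the reversed PTR names (10.0.96.10.in-addr.arpa) imply the usual service IP 10.96.0.10. A small Go sketch that reproduces one such lookup, assuming 10.96.0.10:53 is reachable from where it runs (i.e. from inside the cluster network):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolver pinned to the assumed cluster DNS service address (CoreDNS at 10.96.0.10).
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same name the busybox pods resolve in the log above.
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("kubernetes.default.svc.cluster.local ->", addrs)
}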
	
	
	==> describe nodes <==
	Name:               ha-439113
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_52_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:52:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:06:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:04:27 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:04:27 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:04:27 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:04:27 +0000   Sat, 15 Nov 2025 10:53:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-439113
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                6518a9f9-bb2d-42ae-b78a-3db01b5306a4
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vddcm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-4g6sm             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-mlm6m             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-439113                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-q4kpj                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-439113             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-439113    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-k7bcn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-439113             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-439113                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 13m   kube-proxy       
	  Normal   Starting                 14m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m   kubelet          Node ha-439113 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m   kubelet          Node ha-439113 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m   kubelet          Node ha-439113 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m   node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           13m   node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   NodeReady                13m   kubelet          Node ha-439113 status is now: NodeReady
	  Normal   RegisteredNode           11m   node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	
	
	Name:               ha-439113-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T10_53_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:53:25 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:57:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 15 Nov 2025 10:56:19 +0000   Sat, 15 Nov 2025 10:58:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 15 Nov 2025 10:56:19 +0000   Sat, 15 Nov 2025 10:58:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 15 Nov 2025 10:56:19 +0000   Sat, 15 Nov 2025 10:58:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 15 Nov 2025 10:56:19 +0000   Sat, 15 Nov 2025 10:58:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-439113-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d3455c64-e9a7-4ebe-b716-3cc9dc8ab51a
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-5xw75                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-439113-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-mcj42                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-439113-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-439113-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-kgftx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-439113-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-439113-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        13m   kube-proxy       
	  Normal  RegisteredNode  13m   node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal  NodeNotReady    8m9s  node-controller  Node ha-439113-m02 status is now: NodeNotReady
	
	
	Name:               ha-439113-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T10_54_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:54:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:06:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:05:40 +0000   Sat, 15 Nov 2025 10:54:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:05:40 +0000   Sat, 15 Nov 2025 10:54:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:05:40 +0000   Sat, 15 Nov 2025 10:54:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:05:40 +0000   Sat, 15 Nov 2025 10:55:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-439113-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                a83b5435-8c2a-4b27-b1ef-b4733d66b86e
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vk6xz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-439113-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-kxl4t                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-439113-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-439113-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-njlxj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-439113-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-439113-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        11m   kube-proxy       
	  Normal  RegisteredNode  11m   node-controller  Node ha-439113-m03 event: Registered Node ha-439113-m03 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node ha-439113-m03 event: Registered Node ha-439113-m03 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node ha-439113-m03 event: Registered Node ha-439113-m03 in Controller
	
	
	Name:               ha-439113-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T10_56_52_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:56:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:06:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:04:00 +0000   Sat, 15 Nov 2025 10:56:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:04:00 +0000   Sat, 15 Nov 2025 10:56:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:04:00 +0000   Sat, 15 Nov 2025 10:56:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:04:00 +0000   Sat, 15 Nov 2025 10:57:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-439113-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                bf4456d3-e8dc-4a97-8e4f-cb829c9a4b90
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-trswm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 kindnet-4k2k2               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m58s
	  kube-system                 kube-proxy-2fgtm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m57s                  kube-proxy       
	  Normal   Starting                 9m59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           9m58s                  node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   NodeHasSufficientMemory  9m58s (x3 over 9m59s)  kubelet          Node ha-439113-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m58s (x3 over 9m59s)  kubelet          Node ha-439113-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m58s (x3 over 9m59s)  kubelet          Node ha-439113-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m57s                  node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   RegisteredNode           9m54s                  node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   NodeReady                9m16s                  kubelet          Node ha-439113-m04 status is now: NodeReady
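
The Conditions tables above are what separate the healthy nodes from ha-439113-m02, whose kubelet stopped renewing its lease at 10:57:50 and whose conditions therefore flipped to Unknown/NodeStatusUnknown. A short client-go sketch that dumps the same condition rows for every node, assuming a kubeconfig for the ha-439113 cluster at the default path (the path is an assumption, not from the test harness):

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumed kubeconfig path; any context pointing at the ha-439113 cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
		// Same rows as the Conditions table in `kubectl describe node`; an Unknown
		// Ready status here corresponds to the NodeStatusUnknown entries above.
		for _, c := range n.Status.Conditions {
			fmt.Printf("  %-16s %-8s %s\n", c.Type, c.Status, c.Reason)
		}
	}
}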
	
	
	==> dmesg <==
	[Nov15 09:26] systemd-journald[225]: Failed to send WATCHDOG=1 notification message: Connection refused
	[Nov15 09:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[  +0.057232] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov15 10:38] overlayfs: idmapped layers are currently not supported
	[Nov15 10:39] overlayfs: idmapped layers are currently not supported
	[Nov15 10:52] overlayfs: idmapped layers are currently not supported
	[Nov15 10:53] overlayfs: idmapped layers are currently not supported
	[Nov15 10:54] overlayfs: idmapped layers are currently not supported
	[Nov15 10:56] overlayfs: idmapped layers are currently not supported
	[Nov15 10:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f4035d6f71e56ba53b8d8060485a468d1faf9b1a3bdfedd8aa7da86be584ec11] <==
	{"level":"warn","ts":"2025-11-15T10:59:38.284419Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"10ee04674cfb0a09","rtt":"1.668739ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:40.514810Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:40.514867Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:43.285400Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"10ee04674cfb0a09","rtt":"1.668739ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:43.285412Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"10ee04674cfb0a09","rtt":"20.709005ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:44.516442Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:44.516498Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:48.285675Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"10ee04674cfb0a09","rtt":"20.709005ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:48.285682Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"10ee04674cfb0a09","rtt":"1.668739ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:48.517898Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:48.518036Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:52.519935Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:52.519998Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:53.288377Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"10ee04674cfb0a09","rtt":"1.668739ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:53.288448Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"10ee04674cfb0a09","rtt":"20.709005ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"info","ts":"2025-11-15T10:59:53.394242Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"10ee04674cfb0a09","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-15T10:59:53.394297Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"10ee04674cfb0a09"}
	{"level":"info","ts":"2025-11-15T10:59:53.394315Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"10ee04674cfb0a09"}
	{"level":"info","ts":"2025-11-15T10:59:53.449798Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"10ee04674cfb0a09","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-15T10:59:53.449959Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"10ee04674cfb0a09"}
	{"level":"info","ts":"2025-11-15T10:59:53.478088Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"10ee04674cfb0a09"}
	{"level":"info","ts":"2025-11-15T10:59:53.482447Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"10ee04674cfb0a09"}
	{"level":"info","ts":"2025-11-15T11:02:37.720701Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1629}
	{"level":"info","ts":"2025-11-15T11:02:37.758076Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1629,"took":"36.86288ms","hash":2376258938,"current-db-size-bytes":4849664,"current-db-size":"4.8 MB","current-db-size-in-use-bytes":3022848,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-11-15T11:02:37.758132Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2376258938,"revision":1629,"compact-revision":-1}
	
	
	==> kernel <==
	 11:06:51 up  2:49,  0 user,  load average: 0.87, 1.01, 1.33
	Linux ha-439113 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c5041c1c9a7b23abf75df1eb1474d03e4c704bf14133dc981ee08a378b3e3397] <==
	I1115 11:06:11.301558       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:06:21.299351       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:06:21.299391       1 main.go:301] handling current node
	I1115 11:06:21.299414       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:06:21.299421       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:06:21.299623       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1115 11:06:21.299632       1 main.go:324] Node ha-439113-m03 has CIDR [10.244.2.0/24] 
	I1115 11:06:21.299751       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:06:21.299758       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:06:31.303570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:06:31.303603       1 main.go:301] handling current node
	I1115 11:06:31.303621       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:06:31.303627       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:06:31.304021       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1115 11:06:31.304040       1 main.go:324] Node ha-439113-m03 has CIDR [10.244.2.0/24] 
	I1115 11:06:31.304273       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:06:31.304346       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:06:41.304231       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:06:41.304273       1 main.go:301] handling current node
	I1115 11:06:41.304288       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:06:41.304294       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:06:41.304467       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1115 11:06:41.304480       1 main.go:324] Node ha-439113-m03 has CIDR [10.244.2.0/24] 
	I1115 11:06:41.304546       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:06:41.304557       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63] <==
	I1115 10:52:42.358842       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:52:42.427732       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:52:42.578242       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:52:42.586911       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1115 10:52:42.588308       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:52:42.593909       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:52:42.739735       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:52:43.727370       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:52:43.749842       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:52:43.764070       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:52:47.895686       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:52:48.597560       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:52:48.602925       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:52:48.744987       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1115 10:56:32.235267       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37242: use of closed network connection
	E1115 10:56:32.465177       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37262: use of closed network connection
	E1115 10:56:32.933377       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37298: use of closed network connection
	E1115 10:56:33.367452       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37338: use of closed network connection
	E1115 10:56:33.585138       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37360: use of closed network connection
	E1115 10:56:33.989276       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37390: use of closed network connection
	E1115 10:56:34.244482       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37412: use of closed network connection
	E1115 10:56:34.492173       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37444: use of closed network connection
	E1115 10:56:35.101366       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37488: use of closed network connection
	W1115 10:58:12.601678       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I1115 11:02:40.821095       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [12d1c250e31ea78318f046f42fa718353d22cf0f3dd2a251f9cbcdfbdbabd3a3] <==
	I1115 10:52:47.788439       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:52:47.788989       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:52:47.789070       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:52:47.791552       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:52:47.792313       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:52:47.792374       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:52:47.795130       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:52:47.792745       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:52:47.792733       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:52:47.801484       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:52:47.803070       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:53:25.873159       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-439113-m02\" does not exist"
	I1115 10:53:25.932933       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-439113-m02" podCIDRs=["10.244.1.0/24"]
	I1115 10:53:27.742756       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-439113-m02"
	I1115 10:53:32.743622       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1115 10:54:57.108169       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-p5cmb failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-p5cmb\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1115 10:54:57.525205       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-439113-m03\" does not exist"
	I1115 10:54:57.562807       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-439113-m03" podCIDRs=["10.244.2.0/24"]
	I1115 10:54:57.784268       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-439113-m03"
	I1115 10:56:52.148374       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-439113-m04\" does not exist"
	I1115 10:56:52.176237       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-439113-m04" podCIDRs=["10.244.3.0/24"]
	I1115 10:56:52.827320       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-439113-m04"
	I1115 10:57:34.083635       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-439113-m04"
	I1115 10:58:41.325391       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-439113-m04"
	I1115 11:03:41.408881       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-5xw75"
	
	
	==> kube-proxy [32eb60c7f45d998b805a27e4338741aca603eaf9a27e0a65e24b5cf620344940] <==
	I1115 10:52:51.089191       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:52:51.198904       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:52:51.304092       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:52:51.304125       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 10:52:51.304192       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:52:51.394214       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:52:51.394269       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:52:51.399273       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:52:51.399691       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:52:51.399707       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:52:51.401156       1 config.go:200] "Starting service config controller"
	I1115 10:52:51.401166       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:52:51.401182       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:52:51.401186       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:52:51.401208       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:52:51.401212       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:52:51.405646       1 config.go:309] "Starting node config controller"
	I1115 10:52:51.405679       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:52:51.405687       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:52:51.502235       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:52:51.502271       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:52:51.502321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3460218d601a408c63f0ca5447c707456f5f810e7087fe7d37e58f8fc647abde] <==
	E1115 10:54:58.090815       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 8710afa6-4666-4dcd-a332-94b9d399b6ea(kube-system/kindnet-8vpd2) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8vpd2"
	E1115 10:54:58.090838       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8vpd2\": pod kindnet-8vpd2 is already assigned to node \"ha-439113-m03\"" logger="UnhandledError" pod="kube-system/kindnet-8vpd2"
	I1115 10:54:58.091932       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8vpd2" node="ha-439113-m03"
	E1115 10:54:58.092605       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9qdlw\": pod kube-proxy-9qdlw is already assigned to node \"ha-439113-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9qdlw" node="ha-439113-m03"
	E1115 10:54:58.092714       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 6ae3b63e-9e94-4ba4-bf3d-2327ace904b9(kube-system/kube-proxy-9qdlw) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-9qdlw"
	E1115 10:54:58.092772       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9qdlw\": pod kube-proxy-9qdlw is already assigned to node \"ha-439113-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-9qdlw"
	I1115 10:54:58.094597       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9qdlw" node="ha-439113-m03"
	I1115 10:55:49.850353       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="fcf06a02-6f97-4f03-972d-b514907c4bad" pod="default/busybox-7b57f96db7-b2f5h" assumedNode="ha-439113-m02" currentNode="ha-439113-m03"
	E1115 10:55:49.899106       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-b2f5h\": pod busybox-7b57f96db7-b2f5h is already assigned to node \"ha-439113-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-b2f5h" node="ha-439113-m03"
	E1115 10:55:49.899164       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod fcf06a02-6f97-4f03-972d-b514907c4bad(default/busybox-7b57f96db7-b2f5h) was assumed on ha-439113-m03 but assigned to ha-439113-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-b2f5h"
	E1115 10:55:49.899186       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-b2f5h\": pod busybox-7b57f96db7-b2f5h is already assigned to node \"ha-439113-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-b2f5h"
	E1115 10:55:49.899130       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5xw75\": pod busybox-7b57f96db7-5xw75 is already assigned to node \"ha-439113-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5xw75" node="ha-439113-m02"
	E1115 10:55:49.899335       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5xw75\": pod busybox-7b57f96db7-5xw75 is already assigned to node \"ha-439113-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5xw75"
	I1115 10:55:49.900202       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-b2f5h" node="ha-439113-m02"
	I1115 10:55:49.900927       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5xw75" node="ha-439113-m02"
	E1115 10:55:49.971135       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-vk6xz\": pod busybox-7b57f96db7-vk6xz is already assigned to node \"ha-439113-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-vk6xz" node="ha-439113-m03"
	E1115 10:55:49.971376       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-vk6xz\": pod busybox-7b57f96db7-vk6xz is already assigned to node \"ha-439113-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-vk6xz"
	E1115 10:55:50.011710       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-pvdw4\": pod busybox-7b57f96db7-pvdw4 is already assigned to node \"ha-439113\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-pvdw4" node="ha-439113"
	E1115 10:55:50.013170       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod d6954577-fecf-4f6c-adb6-15227667c812(default/busybox-7b57f96db7-pvdw4) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-pvdw4"
	E1115 10:55:50.013287       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-pvdw4\": pod busybox-7b57f96db7-pvdw4 is already assigned to node \"ha-439113\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-pvdw4"
	I1115 10:55:50.014554       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-pvdw4" node="ha-439113"
	E1115 10:55:51.333063       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-vddcm\": pod busybox-7b57f96db7-vddcm is already assigned to node \"ha-439113\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-vddcm" node="ha-439113"
	E1115 10:55:51.333129       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 92adc10b-e910-45d1-8267-ee2e884d0dcc(default/busybox-7b57f96db7-vddcm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-vddcm"
	E1115 10:55:51.333149       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-vddcm\": pod busybox-7b57f96db7-vddcm is already assigned to node \"ha-439113\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-vddcm"
	I1115 10:55:51.334731       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-vddcm" node="ha-439113"
	
	
	==> kubelet <==
	Nov 15 10:52:50 ha-439113 kubelet[1353]: E1115 10:52:50.097170    1353 projected.go:196] Error preparing data for projected volume kube-api-access-7whdk for pod kube-system/kindnet-q4kpj: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 10:52:50 ha-439113 kubelet[1353]: E1115 10:52:50.097261    1353 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5da9cefc-49b3-4bc2-8cb6-db44ed04b358-kube-api-access-7whdk podName:5da9cefc-49b3-4bc2-8cb6-db44ed04b358 nodeName:}" failed. No retries permitted until 2025-11-15 10:52:50.597235726 +0000 UTC m=+7.072615896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7whdk" (UniqueName: "kubernetes.io/projected/5da9cefc-49b3-4bc2-8cb6-db44ed04b358-kube-api-access-7whdk") pod "kindnet-q4kpj" (UID: "5da9cefc-49b3-4bc2-8cb6-db44ed04b358") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 10:52:50 ha-439113 kubelet[1353]: I1115 10:52:50.615777    1353 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 10:52:51 ha-439113 kubelet[1353]: I1115 10:52:51.867690    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k7bcn" podStartSLOduration=3.8676628490000002 podStartE2EDuration="3.867662849s" podCreationTimestamp="2025-11-15 10:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:52:51.821689898 +0000 UTC m=+8.297070076" watchObservedRunningTime="2025-11-15 10:52:51.867662849 +0000 UTC m=+8.343043027"
	Nov 15 10:52:53 ha-439113 kubelet[1353]: I1115 10:52:53.678682    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-q4kpj" podStartSLOduration=5.678665184 podStartE2EDuration="5.678665184s" podCreationTimestamp="2025-11-15 10:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:52:51.875525758 +0000 UTC m=+8.350905936" watchObservedRunningTime="2025-11-15 10:52:53.678665184 +0000 UTC m=+10.154045354"
	Nov 15 10:53:31 ha-439113 kubelet[1353]: I1115 10:53:31.614360    1353 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 10:53:31 ha-439113 kubelet[1353]: I1115 10:53:31.730975    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9460f377-28d8-418c-9dab-9428dfbfca1d-config-volume\") pod \"coredns-66bc5c9577-4g6sm\" (UID: \"9460f377-28d8-418c-9dab-9428dfbfca1d\") " pod="kube-system/coredns-66bc5c9577-4g6sm"
	Nov 15 10:53:31 ha-439113 kubelet[1353]: I1115 10:53:31.731041    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6xlh\" (UniqueName: \"kubernetes.io/projected/9460f377-28d8-418c-9dab-9428dfbfca1d-kube-api-access-b6xlh\") pod \"coredns-66bc5c9577-4g6sm\" (UID: \"9460f377-28d8-418c-9dab-9428dfbfca1d\") " pod="kube-system/coredns-66bc5c9577-4g6sm"
	Nov 15 10:53:31 ha-439113 kubelet[1353]: I1115 10:53:31.832138    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6a63ca66-7de2-40d8-96f0-a99da4ba3411-tmp\") pod \"storage-provisioner\" (UID: \"6a63ca66-7de2-40d8-96f0-a99da4ba3411\") " pod="kube-system/storage-provisioner"
	Nov 15 10:53:31 ha-439113 kubelet[1353]: I1115 10:53:31.832371    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd5j8\" (UniqueName: \"kubernetes.io/projected/6a63ca66-7de2-40d8-96f0-a99da4ba3411-kube-api-access-sd5j8\") pod \"storage-provisioner\" (UID: \"6a63ca66-7de2-40d8-96f0-a99da4ba3411\") " pod="kube-system/storage-provisioner"
	Nov 15 10:53:31 ha-439113 kubelet[1353]: I1115 10:53:31.832501    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whw9c\" (UniqueName: \"kubernetes.io/projected/d28d9bc0-5e46-4c01-8b62-aa0ef429d935-kube-api-access-whw9c\") pod \"coredns-66bc5c9577-mlm6m\" (UID: \"d28d9bc0-5e46-4c01-8b62-aa0ef429d935\") " pod="kube-system/coredns-66bc5c9577-mlm6m"
	Nov 15 10:53:31 ha-439113 kubelet[1353]: I1115 10:53:31.832592    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d28d9bc0-5e46-4c01-8b62-aa0ef429d935-config-volume\") pod \"coredns-66bc5c9577-mlm6m\" (UID: \"d28d9bc0-5e46-4c01-8b62-aa0ef429d935\") " pod="kube-system/coredns-66bc5c9577-mlm6m"
	Nov 15 10:53:32 ha-439113 kubelet[1353]: W1115 10:53:32.013379    1353 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-14afa271db53e61f2103fdadfb4f751dd350cbe116c6e2a8db9c7e7f10867d2f WatchSource:0}: Error finding container 14afa271db53e61f2103fdadfb4f751dd350cbe116c6e2a8db9c7e7f10867d2f: Status 404 returned error can't find the container with id 14afa271db53e61f2103fdadfb4f751dd350cbe116c6e2a8db9c7e7f10867d2f
	Nov 15 10:53:32 ha-439113 kubelet[1353]: W1115 10:53:32.080700    1353 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-220741ce57653bd04b151a021265cc7a8a5489293e3386014b55a8cac8ec57a2 WatchSource:0}: Error finding container 220741ce57653bd04b151a021265cc7a8a5489293e3386014b55a8cac8ec57a2: Status 404 returned error can't find the container with id 220741ce57653bd04b151a021265cc7a8a5489293e3386014b55a8cac8ec57a2
	Nov 15 10:53:32 ha-439113 kubelet[1353]: I1115 10:53:32.924093    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mlm6m" podStartSLOduration=44.924072498 podStartE2EDuration="44.924072498s" podCreationTimestamp="2025-11-15 10:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:53:32.920760903 +0000 UTC m=+49.396141098" watchObservedRunningTime="2025-11-15 10:53:32.924072498 +0000 UTC m=+49.399452668"
	Nov 15 10:53:32 ha-439113 kubelet[1353]: I1115 10:53:32.925249    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.925231822 podStartE2EDuration="43.925231822s" podCreationTimestamp="2025-11-15 10:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:53:32.902380975 +0000 UTC m=+49.377761202" watchObservedRunningTime="2025-11-15 10:53:32.925231822 +0000 UTC m=+49.400612009"
	Nov 15 10:53:33 ha-439113 kubelet[1353]: I1115 10:53:33.067135    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4g6sm" podStartSLOduration=45.067113471 podStartE2EDuration="45.067113471s" podCreationTimestamp="2025-11-15 10:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:53:32.95642295 +0000 UTC m=+49.431803169" watchObservedRunningTime="2025-11-15 10:53:33.067113471 +0000 UTC m=+49.542493640"
	Nov 15 10:55:50 ha-439113 kubelet[1353]: I1115 10:55:50.079046    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdm4s\" (UniqueName: \"kubernetes.io/projected/d6954577-fecf-4f6c-adb6-15227667c812-kube-api-access-vdm4s\") pod \"busybox-7b57f96db7-pvdw4\" (UID: \"d6954577-fecf-4f6c-adb6-15227667c812\") " pod="default/busybox-7b57f96db7-pvdw4"
	Nov 15 10:55:50 ha-439113 kubelet[1353]: E1115 10:55:50.231385    1353 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-vdm4s], unattached volumes=[], failed to process volumes=[]: context canceled" pod="default/busybox-7b57f96db7-pvdw4" podUID="d6954577-fecf-4f6c-adb6-15227667c812"
	Nov 15 10:55:50 ha-439113 kubelet[1353]: I1115 10:55:50.384749    1353 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdm4s\" (UniqueName: \"kubernetes.io/projected/d6954577-fecf-4f6c-adb6-15227667c812-kube-api-access-vdm4s\") pod \"d6954577-fecf-4f6c-adb6-15227667c812\" (UID: \"d6954577-fecf-4f6c-adb6-15227667c812\") "
	Nov 15 10:55:50 ha-439113 kubelet[1353]: I1115 10:55:50.389992    1353 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6954577-fecf-4f6c-adb6-15227667c812-kube-api-access-vdm4s" (OuterVolumeSpecName: "kube-api-access-vdm4s") pod "d6954577-fecf-4f6c-adb6-15227667c812" (UID: "d6954577-fecf-4f6c-adb6-15227667c812"). InnerVolumeSpecName "kube-api-access-vdm4s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 15 10:55:50 ha-439113 kubelet[1353]: I1115 10:55:50.485933    1353 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vdm4s\" (UniqueName: \"kubernetes.io/projected/d6954577-fecf-4f6c-adb6-15227667c812-kube-api-access-vdm4s\") on node \"ha-439113\" DevicePath \"\""
	Nov 15 10:55:51 ha-439113 kubelet[1353]: I1115 10:55:51.396048    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ghqb\" (UniqueName: \"kubernetes.io/projected/92adc10b-e910-45d1-8267-ee2e884d0dcc-kube-api-access-5ghqb\") pod \"busybox-7b57f96db7-vddcm\" (UID: \"92adc10b-e910-45d1-8267-ee2e884d0dcc\") " pod="default/busybox-7b57f96db7-vddcm"
	Nov 15 10:55:51 ha-439113 kubelet[1353]: I1115 10:55:51.651287    1353 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6954577-fecf-4f6c-adb6-15227667c812" path="/var/lib/kubelet/pods/d6954577-fecf-4f6c-adb6-15227667c812/volumes"
	Nov 15 10:55:51 ha-439113 kubelet[1353]: W1115 10:55:51.666962    1353 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-9a1924d1444fcffae5090157aa58d5f2c625d79ed31a0ff72e3fd6567559a4d4 WatchSource:0}: Error finding container 9a1924d1444fcffae5090157aa58d5f2c625d79ed31a0ff72e3fd6567559a4d4: Status 404 returned error can't find the container with id 9a1924d1444fcffae5090157aa58d5f2c625d79ed31a0ff72e3fd6567559a4d4
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-439113 -n ha-439113
helpers_test.go:269: (dbg) Run:  kubectl --context ha-439113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (521.68s)

x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.69s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.012452486s)
ha_test.go:309: expected profile "ha-439113" in json of 'profile list' to have "HAppy" status but have "Degraded" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-439113\",\"Status\":\"Degraded\",\"Config\":{\"Name\":\"ha-439113\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-439113\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"N
ame\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-devi
ce-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":
false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-439113
helpers_test.go:243: (dbg) docker inspect ha-439113:

-- stdout --
	[
	    {
	        "Id": "d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc",
	        "Created": "2025-11-15T10:52:17.169946413Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 616217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:52:17.244124933Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/hosts",
	        "LogPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc-json.log",
	        "Name": "/ha-439113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-439113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-439113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc",
	                "LowerDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-439113",
	                "Source": "/var/lib/docker/volumes/ha-439113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-439113",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-439113",
	                "name.minikube.sigs.k8s.io": "ha-439113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4b8649b658807d1e28bfc43925c48d4d32daddec11cb9f766be693df9a73c857",
	            "SandboxKey": "/var/run/docker/netns/4b8649b65880",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33528"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-439113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:6e:3e:a3:f6:71",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70b4341e58399e11a79033573f4328a7d843f08aeced339b6115cf0c5d327007",
	                    "EndpointID": "0a4055c126d7ee276ccb0bdcb15555844a98e2e6d37a65e167b535cc8f74d59b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-439113",
	                        "d546a4fc19d8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-439113 -n ha-439113
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 logs -n 25: (1.595697926s)
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m03:/home/docker/cp-test.txt ha-439113:/home/docker/cp-test_ha-439113-m03_ha-439113.txt                       │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113 sudo cat /home/docker/cp-test_ha-439113-m03_ha-439113.txt                                                 │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m03:/home/docker/cp-test.txt ha-439113-m02:/home/docker/cp-test_ha-439113-m03_ha-439113-m02.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m02 sudo cat /home/docker/cp-test_ha-439113-m03_ha-439113-m02.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m03:/home/docker/cp-test.txt ha-439113-m04:/home/docker/cp-test_ha-439113-m03_ha-439113-m04.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test_ha-439113-m03_ha-439113-m04.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp testdata/cp-test.txt ha-439113-m04:/home/docker/cp-test.txt                                                             │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1077460994/001/cp-test_ha-439113-m04.txt │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113:/home/docker/cp-test_ha-439113-m04_ha-439113.txt                       │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113.txt                                                 │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113-m02:/home/docker/cp-test_ha-439113-m04_ha-439113-m02.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m02 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113-m02.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113-m03:/home/docker/cp-test_ha-439113-m04_ha-439113-m03.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113-m03.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ node    │ ha-439113 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:58 UTC │
	│ node    │ ha-439113 node start m02 --alsologtostderr -v 5                                                                                      │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:58 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:52:11
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:52:11.684114  615834 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:52:11.684311  615834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:52:11.684321  615834 out.go:374] Setting ErrFile to fd 2...
	I1115 10:52:11.684332  615834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:52:11.684635  615834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:52:11.685086  615834 out.go:368] Setting JSON to false
	I1115 10:52:11.686005  615834 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9283,"bootTime":1763194649,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 10:52:11.686077  615834 start.go:143] virtualization:  
	I1115 10:52:11.690439  615834 out.go:179] * [ha-439113] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:52:11.695356  615834 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:52:11.695439  615834 notify.go:221] Checking for updates...
	I1115 10:52:11.702671  615834 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:52:11.706147  615834 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 10:52:11.709608  615834 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 10:52:11.712812  615834 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:52:11.716100  615834 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:52:11.719602  615834 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:52:11.738907  615834 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:52:11.739038  615834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:52:11.803656  615834 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-15 10:52:11.794481335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:52:11.803768  615834 docker.go:319] overlay module found
	I1115 10:52:11.809139  615834 out.go:179] * Using the docker driver based on user configuration
	I1115 10:52:11.812090  615834 start.go:309] selected driver: docker
	I1115 10:52:11.812109  615834 start.go:930] validating driver "docker" against <nil>
	I1115 10:52:11.812123  615834 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:52:11.812965  615834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:52:11.867553  615834 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-15 10:52:11.858068036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:52:11.867723  615834 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:52:11.867964  615834 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:52:11.870990  615834 out.go:179] * Using Docker driver with root privileges
	I1115 10:52:11.873936  615834 cni.go:84] Creating CNI manager for ""
	I1115 10:52:11.874009  615834 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1115 10:52:11.874022  615834 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:52:11.874110  615834 start.go:353] cluster config:
	{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1115 10:52:11.877211  615834 out.go:179] * Starting "ha-439113" primary control-plane node in "ha-439113" cluster
	I1115 10:52:11.880108  615834 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:52:11.883156  615834 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:52:11.885997  615834 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:52:11.886049  615834 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:52:11.886066  615834 cache.go:65] Caching tarball of preloaded images
	I1115 10:52:11.886082  615834 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:52:11.886149  615834 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:52:11.886160  615834 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:52:11.886506  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:52:11.886537  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json: {Name:mk503d89be400de3662f84cf87d45d7e7cbd7d57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:11.906117  615834 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:52:11.906142  615834 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:52:11.906161  615834 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:52:11.906185  615834 start.go:360] acquireMachinesLock for ha-439113: {Name:mk8f5fddf42cbee62c5cd775824daee5e174c730 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:52:11.906292  615834 start.go:364] duration metric: took 86.18µs to acquireMachinesLock for "ha-439113"
	I1115 10:52:11.906323  615834 start.go:93] Provisioning new machine with config: &{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:52:11.906401  615834 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:52:11.909900  615834 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:52:11.910149  615834 start.go:159] libmachine.API.Create for "ha-439113" (driver="docker")
	I1115 10:52:11.910196  615834 client.go:173] LocalClient.Create starting
	I1115 10:52:11.910286  615834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 10:52:11.910325  615834 main.go:143] libmachine: Decoding PEM data...
	I1115 10:52:11.910347  615834 main.go:143] libmachine: Parsing certificate...
	I1115 10:52:11.910403  615834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 10:52:11.910431  615834 main.go:143] libmachine: Decoding PEM data...
	I1115 10:52:11.910445  615834 main.go:143] libmachine: Parsing certificate...
	I1115 10:52:11.910811  615834 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:52:11.926803  615834 cli_runner.go:211] docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:52:11.926899  615834 network_create.go:284] running [docker network inspect ha-439113] to gather additional debugging logs...
	I1115 10:52:11.926919  615834 cli_runner.go:164] Run: docker network inspect ha-439113
	W1115 10:52:11.942752  615834 cli_runner.go:211] docker network inspect ha-439113 returned with exit code 1
	I1115 10:52:11.942781  615834 network_create.go:287] error running [docker network inspect ha-439113]: docker network inspect ha-439113: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-439113 not found
	I1115 10:52:11.942795  615834 network_create.go:289] output of [docker network inspect ha-439113]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-439113 not found
	
	** /stderr **
	I1115 10:52:11.942897  615834 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:52:11.959384  615834 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018caf60}
	I1115 10:52:11.959435  615834 network_create.go:124] attempt to create docker network ha-439113 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1115 10:52:11.959497  615834 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-439113 ha-439113
	I1115 10:52:12.027139  615834 network_create.go:108] docker network ha-439113 192.168.49.0/24 created
	I1115 10:52:12.027175  615834 kic.go:121] calculated static IP "192.168.49.2" for the "ha-439113" container
	I1115 10:52:12.027259  615834 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:52:12.044026  615834 cli_runner.go:164] Run: docker volume create ha-439113 --label name.minikube.sigs.k8s.io=ha-439113 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:52:12.062229  615834 oci.go:103] Successfully created a docker volume ha-439113
	I1115 10:52:12.062343  615834 cli_runner.go:164] Run: docker run --rm --name ha-439113-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-439113 --entrypoint /usr/bin/test -v ha-439113:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:52:12.627885  615834 oci.go:107] Successfully prepared a docker volume ha-439113
	I1115 10:52:12.627981  615834 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:52:12.627997  615834 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:52:12.628073  615834 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-439113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:52:17.096843  615834 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-439113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.468725781s)
	I1115 10:52:17.096922  615834 kic.go:203] duration metric: took 4.468921057s to extract preloaded images to volume ...
	W1115 10:52:17.097066  615834 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:52:17.097180  615834 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:52:17.154778  615834 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-439113 --name ha-439113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-439113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-439113 --network ha-439113 --ip 192.168.49.2 --volume ha-439113:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:52:17.461306  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Running}}
	I1115 10:52:17.480158  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:52:17.506964  615834 cli_runner.go:164] Run: docker exec ha-439113 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:52:17.561103  615834 oci.go:144] the created container "ha-439113" has a running status.
	I1115 10:52:17.561143  615834 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa...
	I1115 10:52:17.707967  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1115 10:52:17.708016  615834 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:52:17.736735  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:52:17.766109  615834 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:52:17.766130  615834 kic_runner.go:114] Args: [docker exec --privileged ha-439113 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:52:17.827825  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:52:17.862302  615834 machine.go:94] provisionDockerMachine start ...
	I1115 10:52:17.862429  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:17.886994  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:52:17.887345  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33524 <nil> <nil>}
	I1115 10:52:17.887355  615834 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:52:17.888177  615834 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57300->127.0.0.1:33524: read: connection reset by peer
	I1115 10:52:21.040625  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113
	
	I1115 10:52:21.040656  615834 ubuntu.go:182] provisioning hostname "ha-439113"
	I1115 10:52:21.040728  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:21.057994  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:52:21.058308  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33524 <nil> <nil>}
	I1115 10:52:21.058324  615834 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113 && echo "ha-439113" | sudo tee /etc/hostname
	I1115 10:52:21.218262  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113
	
	I1115 10:52:21.218365  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:21.236459  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:52:21.236769  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33524 <nil> <nil>}
	I1115 10:52:21.236791  615834 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:52:21.389265  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:52:21.389335  615834 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 10:52:21.389362  615834 ubuntu.go:190] setting up certificates
	I1115 10:52:21.389388  615834 provision.go:84] configureAuth start
	I1115 10:52:21.389458  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 10:52:21.407404  615834 provision.go:143] copyHostCerts
	I1115 10:52:21.407451  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:52:21.407485  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 10:52:21.407498  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:52:21.407598  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 10:52:21.407696  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:52:21.407722  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 10:52:21.407732  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:52:21.407760  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 10:52:21.407821  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:52:21.407848  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 10:52:21.407856  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:52:21.407881  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 10:52:21.407942  615834 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113 san=[127.0.0.1 192.168.49.2 ha-439113 localhost minikube]
	I1115 10:52:21.601128  615834 provision.go:177] copyRemoteCerts
	I1115 10:52:21.601196  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:52:21.601243  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:21.618059  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:52:21.720640  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 10:52:21.720702  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 10:52:21.738499  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 10:52:21.738563  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1115 10:52:21.756334  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 10:52:21.756411  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:52:21.773802  615834 provision.go:87] duration metric: took 384.385626ms to configureAuth
	I1115 10:52:21.773827  615834 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:52:21.774007  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:52:21.774108  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:21.792181  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:52:21.792488  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33524 <nil> <nil>}
	I1115 10:52:21.792505  615834 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:52:22.055487  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:52:22.055508  615834 machine.go:97] duration metric: took 4.19318673s to provisionDockerMachine
	I1115 10:52:22.055518  615834 client.go:176] duration metric: took 10.145311721s to LocalClient.Create
	I1115 10:52:22.055558  615834 start.go:167] duration metric: took 10.145409413s to libmachine.API.Create "ha-439113"
	I1115 10:52:22.055565  615834 start.go:293] postStartSetup for "ha-439113" (driver="docker")
	I1115 10:52:22.055575  615834 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:52:22.055642  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:52:22.055701  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:22.074873  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:52:22.180822  615834 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:52:22.184110  615834 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:52:22.184181  615834 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:52:22.184200  615834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 10:52:22.184271  615834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 10:52:22.184357  615834 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 10:52:22.184373  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 10:52:22.184487  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:52:22.192120  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:52:22.209855  615834 start.go:296] duration metric: took 154.275573ms for postStartSetup
	I1115 10:52:22.210297  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 10:52:22.229709  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:52:22.229990  615834 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:52:22.230031  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:22.246845  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:52:22.349690  615834 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:52:22.354322  615834 start.go:128] duration metric: took 10.447903635s to createHost
	I1115 10:52:22.354345  615834 start.go:83] releasing machines lock for "ha-439113", held for 10.448038496s
	I1115 10:52:22.354414  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 10:52:22.370646  615834 ssh_runner.go:195] Run: cat /version.json
	I1115 10:52:22.370699  615834 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:52:22.370785  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:22.370703  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:22.391820  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:52:22.401038  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:52:22.492706  615834 ssh_runner.go:195] Run: systemctl --version
	I1115 10:52:22.586324  615834 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:52:22.621059  615834 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:52:22.625588  615834 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:52:22.625696  615834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:52:22.653803  615834 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 10:52:22.653872  615834 start.go:496] detecting cgroup driver to use...
	I1115 10:52:22.653923  615834 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:52:22.654000  615834 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:52:22.671598  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:52:22.684101  615834 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:52:22.684164  615834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:52:22.701953  615834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:52:22.720477  615834 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:52:22.839197  615834 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:52:22.973776  615834 docker.go:234] disabling docker service ...
	I1115 10:52:22.973890  615834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:52:22.996835  615834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:52:23.014128  615834 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:52:23.134231  615834 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:52:23.267304  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:52:23.279966  615834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:52:23.293982  615834 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:52:23.294052  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:52:23.303416  615834 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:52:23.303487  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:52:23.312786  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:52:23.321901  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:52:23.330667  615834 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:52:23.339021  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:52:23.347575  615834 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:52:23.361325  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:52:23.370249  615834 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:52:23.377894  615834 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:52:23.385134  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:52:23.496671  615834 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:52:23.627621  615834 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:52:23.627747  615834 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:52:23.632590  615834 start.go:564] Will wait 60s for crictl version
	I1115 10:52:23.632707  615834 ssh_runner.go:195] Run: which crictl
	I1115 10:52:23.636316  615834 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:52:23.660657  615834 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:52:23.660772  615834 ssh_runner.go:195] Run: crio --version
	I1115 10:52:23.688588  615834 ssh_runner.go:195] Run: crio --version
	I1115 10:52:23.724523  615834 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:52:23.727329  615834 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:52:23.742793  615834 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 10:52:23.746661  615834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:52:23.756777  615834 kubeadm.go:884] updating cluster {Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:52:23.756916  615834 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:52:23.756985  615834 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:52:23.791518  615834 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:52:23.791553  615834 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:52:23.791608  615834 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:52:23.816324  615834 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:52:23.816345  615834 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:52:23.816352  615834 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 10:52:23.816457  615834 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
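	
	The ExecStart flags above are what minikube renders into the kubelet systemd drop-in; a few lines below, the log shows the unit being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal sketch for double-checking the rendered unit on the node, assuming the ha-439113 profile is still running (the `minikube ssh "<cmd>"` form runs a single command over SSH):
	
	out/minikube-linux-arm64 -p ha-439113 ssh "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	out/minikube-linux-arm64 -p ha-439113 ssh "systemctl cat kubelet"
	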
	I1115 10:52:23.816543  615834 ssh_runner.go:195] Run: crio config
	I1115 10:52:23.871271  615834 cni.go:84] Creating CNI manager for ""
	I1115 10:52:23.871296  615834 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1115 10:52:23.871344  615834 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:52:23.871375  615834 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-439113 NodeName:ha-439113 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:52:23.871518  615834 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-439113"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
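	
	The InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration documents above are written to /var/tmp/minikube/kubeadm.yaml.new (see the scp step further down). A sketch for sanity-checking such a generated file, assuming kubeadm is cached alongside kubelet under /var/lib/minikube/binaries/v1.34.1; recent kubeadm releases ship a `config validate` subcommand, and if it is unavailable, `kubeadm init --dry-run --config ...` exercises the same parsing:
	
	out/minikube-linux-arm64 -p ha-439113 ssh "sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"
	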
	
	I1115 10:52:23.871550  615834 kube-vip.go:115] generating kube-vip config ...
	I1115 10:52:23.871606  615834 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 10:52:23.883474  615834 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:52:23.883590  615834 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
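	
	This manifest was generated after kube-vip.go gave up on IPVS control-plane load-balancing because `lsmod | grep ip_vs` returned nothing, so kube-vip appears to fall back to leader-elected ARP advertisement of the VIP 192.168.49.254 rather than IPVS load-balancing. A minimal sketch for making the IPVS path available on a future run, assuming the docker driver (the node container shares the host kernel, so the modules must be loaded on the host; ip_vs_rr is the usual round-robin scheduler module and is an assumption here):
	
	sudo modprobe ip_vs
	sudo modprobe ip_vs_rr
	lsmod | grep ip_vs
	printf 'ip_vs\nip_vs_rr\n' | sudo tee /etc/modules-load.d/ip_vs.conf
	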
	I1115 10:52:23.883672  615834 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:52:23.891368  615834 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:52:23.891438  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1115 10:52:23.899042  615834 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1115 10:52:23.911909  615834 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:52:23.924778  615834 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1115 10:52:23.937611  615834 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1115 10:52:23.950683  615834 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 10:52:23.954252  615834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:52:23.964098  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:52:24.090640  615834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:52:24.107612  615834 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.2
	I1115 10:52:24.107684  615834 certs.go:195] generating shared ca certs ...
	I1115 10:52:24.107716  615834 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:24.107920  615834 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 10:52:24.108024  615834 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 10:52:24.108053  615834 certs.go:257] generating profile certs ...
	I1115 10:52:24.108166  615834 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 10:52:24.108201  615834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt with IP's: []
	I1115 10:52:24.554437  615834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt ...
	I1115 10:52:24.554475  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt: {Name:mk438c91bbfdc71ed98bf83a35686eb336e160af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:24.554716  615834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key ...
	I1115 10:52:24.554744  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key: {Name:mk02e6816386c2f23446825dc7817e68bb37681f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:24.554852  615834 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.0606794e
	I1115 10:52:24.554871  615834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.0606794e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1115 10:52:24.846690  615834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.0606794e ...
	I1115 10:52:24.846719  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.0606794e: {Name:mk6e8b02c721d9233c644f83207024f5d8ec47b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:24.846896  615834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.0606794e ...
	I1115 10:52:24.846911  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.0606794e: {Name:mk550e3639d934c5207f115051431648085f918a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:24.846992  615834 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.0606794e -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt
	I1115 10:52:24.847070  615834 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.0606794e -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key
	I1115 10:52:24.847142  615834 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
	I1115 10:52:24.847158  615834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt with IP's: []
	I1115 10:52:25.108305  615834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt ...
	I1115 10:52:25.108333  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt: {Name:mke961fbe90f89a22239bb6958edf2896c46d23c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:25.108521  615834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key ...
	I1115 10:52:25.108534  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key: {Name:mka3b2de22e0defa33f1fbe91a5aef4867a64317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
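certs.go is generating the profile's apiserver certificate with IP SANs that include both the node address (192.168.49.2) and the HA VIP (192.168.49.254): clients reaching the API server through kube-vip must find the VIP in the certificate. A minimal sketch of issuing such a cert from an existing CA with Go's crypto/x509 (errors elided for brevity; an RSA PKCS#1 CA key is assumed here):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load the existing CA pair (paths are illustrative).
        caPEM, _ := os.ReadFile("ca.crt")
        caKeyPEM, _ := os.ReadFile("ca.key")
        caBlock, _ := pem.Decode(caPEM)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: PKCS#1 RSA key

        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
                net.ParseIP("192.168.49.254"), // the HA VIP must be a SAN
            },
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }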
	I1115 10:52:25.108626  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 10:52:25.108646  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 10:52:25.108659  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 10:52:25.108675  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 10:52:25.108688  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 10:52:25.108704  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 10:52:25.108742  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 10:52:25.108765  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 10:52:25.108819  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 10:52:25.108874  615834 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 10:52:25.108885  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:52:25.108911  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 10:52:25.108936  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:52:25.108963  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 10:52:25.109009  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:52:25.109039  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 10:52:25.109061  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:52:25.109072  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 10:52:25.109631  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:52:25.130067  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 10:52:25.148156  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:52:25.167194  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:52:25.185554  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:52:25.204053  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:52:25.222157  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:52:25.241325  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:52:25.258938  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 10:52:25.276403  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:52:25.294188  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 10:52:25.312518  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:52:25.325426  615834 ssh_runner.go:195] Run: openssl version
	I1115 10:52:25.331663  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 10:52:25.340063  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 10:52:25.343617  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 10:52:25.343733  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 10:52:25.384318  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:52:25.392577  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:52:25.400710  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:52:25.404422  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:52:25.404488  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:52:25.445776  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:52:25.454084  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 10:52:25.462420  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 10:52:25.466760  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 10:52:25.466825  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 10:52:25.507739  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
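The sequence above is how each extra CA is made trusted inside the node: the PEM is copied under /usr/share/ca-certificates, openssl reports its subject hash, and /etc/ssl/certs/<hash>.0 is symlinked back to it. A sketch of that dance (run as root in practice):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // trustCert computes the OpenSSL subject hash of a PEM certificate and
    // points the conventional /etc/ssl/certs/<hash>.0 link at it.
    func trustCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }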
	I1115 10:52:25.516688  615834 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:52:25.520465  615834 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:52:25.520553  615834 kubeadm.go:401] StartCluster: {Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:52:25.520641  615834 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:52:25.520715  615834 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:52:25.547927  615834 cri.go:89] found id: ""
	I1115 10:52:25.548044  615834 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:52:25.555966  615834 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:52:25.563758  615834 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:52:25.563877  615834 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:52:25.571839  615834 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:52:25.571860  615834 kubeadm.go:158] found existing configuration files:
	
	I1115 10:52:25.571936  615834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:52:25.579638  615834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:52:25.579754  615834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:52:25.587053  615834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:52:25.594888  615834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:52:25.594978  615834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:52:25.602693  615834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:52:25.610312  615834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:52:25.610393  615834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:52:25.617971  615834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:52:25.625886  615834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:52:25.625983  615834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
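The grep/rm pairs above implement a small cleanup rule: any pre-existing kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is treated as stale and removed before kubeadm init runs. Roughly, in Go (a sketch, not minikube's code):

    package main

    import (
        "bytes"
        "os"
    )

    // cleanStaleConfigs keeps each kubeconfig only if it already references the
    // expected control-plane endpoint; otherwise it is removed.
    func cleanStaleConfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil {
                continue // missing file: nothing to clean (first start)
            }
            if !bytes.Contains(data, []byte(endpoint)) {
                os.Remove(p) // stale config from a previous cluster: drop it
            }
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }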
	I1115 10:52:25.633574  615834 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:52:25.677252  615834 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:52:25.677698  615834 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:52:25.708412  615834 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:52:25.708566  615834 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 10:52:25.708672  615834 kubeadm.go:319] OS: Linux
	I1115 10:52:25.708752  615834 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:52:25.708819  615834 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:52:25.708896  615834 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:52:25.708957  615834 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:52:25.709016  615834 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:52:25.709075  615834 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:52:25.709130  615834 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:52:25.709188  615834 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:52:25.709245  615834 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:52:25.779557  615834 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:52:25.779756  615834 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:52:25.779911  615834 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:52:25.789262  615834 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:52:25.795756  615834 out.go:252]   - Generating certificates and keys ...
	I1115 10:52:25.795921  615834 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:52:25.796023  615834 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:52:26.163701  615834 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:52:26.496396  615834 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:52:27.022598  615834 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:52:27.803078  615834 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:52:28.032504  615834 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:52:28.032888  615834 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [ha-439113 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 10:52:28.137411  615834 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:52:28.137819  615834 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [ha-439113 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 10:52:29.114848  615834 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:52:29.664327  615834 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:52:29.906078  615834 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:52:29.906403  615834 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:52:32.408567  615834 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:52:32.642398  615834 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:52:33.243645  615834 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:52:33.594554  615834 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:52:33.707496  615834 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:52:33.708305  615834 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:52:33.711007  615834 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:52:33.714419  615834 out.go:252]   - Booting up control plane ...
	I1115 10:52:33.714532  615834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:52:33.714622  615834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:52:33.714697  615834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:52:33.731207  615834 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:52:33.731325  615834 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:52:33.738515  615834 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:52:33.738836  615834 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:52:33.739039  615834 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:52:33.869273  615834 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:52:33.869411  615834 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:52:35.371019  615834 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501707463s
	I1115 10:52:35.374685  615834 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:52:35.374789  615834 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1115 10:52:35.374886  615834 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:52:35.374972  615834 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:52:39.892256  615834 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.514709618s
	I1115 10:52:40.863354  615834 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.488586302s
	I1115 10:52:42.879079  615834 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.504323654s
	I1115 10:52:42.898852  615834 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:52:42.914349  615834 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:52:42.936963  615834 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:52:42.937182  615834 kubeadm.go:319] [mark-control-plane] Marking the node ha-439113 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:52:42.949865  615834 kubeadm.go:319] [bootstrap-token] Using token: cozhby.k5651djpc1zqxsaw
	I1115 10:52:42.952786  615834 out.go:252]   - Configuring RBAC rules ...
	I1115 10:52:42.952958  615834 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:52:42.957952  615834 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:52:42.966656  615834 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:52:42.971023  615834 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:52:42.976953  615834 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:52:42.981154  615834 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:52:43.286508  615834 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:52:43.753810  615834 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:52:44.286640  615834 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:52:44.288033  615834 kubeadm.go:319] 
	I1115 10:52:44.288118  615834 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:52:44.288124  615834 kubeadm.go:319] 
	I1115 10:52:44.288205  615834 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:52:44.288209  615834 kubeadm.go:319] 
	I1115 10:52:44.288236  615834 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:52:44.288725  615834 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:52:44.288792  615834 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:52:44.288800  615834 kubeadm.go:319] 
	I1115 10:52:44.288903  615834 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:52:44.288915  615834 kubeadm.go:319] 
	I1115 10:52:44.288965  615834 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:52:44.288973  615834 kubeadm.go:319] 
	I1115 10:52:44.289027  615834 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:52:44.289108  615834 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:52:44.289183  615834 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:52:44.289190  615834 kubeadm.go:319] 
	I1115 10:52:44.289597  615834 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:52:44.289692  615834 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:52:44.289698  615834 kubeadm.go:319] 
	I1115 10:52:44.289852  615834 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cozhby.k5651djpc1zqxsaw \
	I1115 10:52:44.289976  615834 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a \
	I1115 10:52:44.290007  615834 kubeadm.go:319] 	--control-plane 
	I1115 10:52:44.290014  615834 kubeadm.go:319] 
	I1115 10:52:44.290104  615834 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:52:44.290113  615834 kubeadm.go:319] 
	I1115 10:52:44.290200  615834 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cozhby.k5651djpc1zqxsaw \
	I1115 10:52:44.290311  615834 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a 
	I1115 10:52:44.294927  615834 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:52:44.295159  615834 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 10:52:44.295268  615834 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
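The join commands printed above carry a --discovery-token-ca-cert-hash; kubeadm derives it as the SHA-256 of the cluster CA certificate's DER-encoded public key (SubjectPublicKeyInfo). It can be recomputed on the node along these lines (CA path taken from the logs above):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // DER-encoded SubjectPublicKeyInfo of the CA public key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }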
	I1115 10:52:44.295283  615834 cni.go:84] Creating CNI manager for ""
	I1115 10:52:44.295290  615834 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1115 10:52:44.298471  615834 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:52:44.301289  615834 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:52:44.305295  615834 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:52:44.305317  615834 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:52:44.317787  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:52:44.607643  615834 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:52:44.607800  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:44.607802  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-439113 minikube.k8s.io/updated_at=2025_11_15T10_52_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=ha-439113 minikube.k8s.io/primary=true
	I1115 10:52:44.622812  615834 ops.go:34] apiserver oom_adj: -16
	I1115 10:52:44.745222  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:45.249009  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:45.746142  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:46.245879  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:46.746300  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:47.246287  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:47.745337  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:48.246229  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:48.745304  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:52:48.900701  615834 kubeadm.go:1114] duration metric: took 4.292964203s to wait for elevateKubeSystemPrivileges
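The burst of identical "kubectl get sa default" runs above is a readiness poll: minikube waits for the default ServiceAccount in kube-system to exist before the minikube-rbac cluster-admin binding created earlier can take effect. A sketch of such a poll:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries "kubectl get sa default" until it succeeds or
    // the timeout elapses.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        _ = waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
            "/var/lib/minikube/kubeconfig", 2*time.Minute)
    }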
	I1115 10:52:48.900725  615834 kubeadm.go:403] duration metric: took 23.380179963s to StartCluster
	I1115 10:52:48.900743  615834 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:48.900800  615834 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 10:52:48.901471  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:52:48.901687  615834 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:52:48.901714  615834 start.go:242] waiting for startup goroutines ...
	I1115 10:52:48.901721  615834 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:52:48.901780  615834 addons.go:70] Setting storage-provisioner=true in profile "ha-439113"
	I1115 10:52:48.901799  615834 addons.go:239] Setting addon storage-provisioner=true in "ha-439113"
	I1115 10:52:48.901823  615834 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:52:48.902304  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:52:48.902464  615834 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:52:48.902711  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:52:48.902750  615834 addons.go:70] Setting default-storageclass=true in profile "ha-439113"
	I1115 10:52:48.902767  615834 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "ha-439113"
	I1115 10:52:48.902994  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:52:48.930280  615834 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:52:48.930805  615834 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 10:52:48.930827  615834 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 10:52:48.930834  615834 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 10:52:48.930839  615834 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 10:52:48.930844  615834 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 10:52:48.931179  615834 addons.go:239] Setting addon default-storageclass=true in "ha-439113"
	I1115 10:52:48.931211  615834 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:52:48.931630  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:52:48.937022  615834 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1115 10:52:48.954692  615834 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:52:48.954718  615834 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:52:48.954790  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:48.958729  615834 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:52:48.961757  615834 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:52:48.961785  615834 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:52:48.961851  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:52:48.991956  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:52:49.003019  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:52:49.126632  615834 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:52:49.199038  615834 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:52:49.200729  615834 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:52:49.476949  615834 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
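The sed pipeline above patches the CoreDNS ConfigMap so that host.minikube.internal resolves to the gateway IP 192.168.49.1; concretely it inserts a hosts{} block ahead of the forward plugin in the Corefile. A sketch of the same edit done in Go (the stock Corefile layout is assumed):

    package main

    import (
        "fmt"
        "strings"
    )

    const hostsBlock = `        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
`

    // injectHostRecord places the hosts{} block immediately before the
    // "forward . /etc/resolv.conf" line of a Corefile.
    func injectHostRecord(corefile string) string {
        return strings.Replace(corefile,
            "        forward . /etc/resolv.conf",
            hostsBlock+"        forward . /etc/resolv.conf", 1)
    }

    func main() {
        sample := "    .:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n    }\n"
        fmt.Println(injectHostRecord(sample))
    }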
	I1115 10:52:49.712750  615834 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:52:49.715645  615834 addons.go:515] duration metric: took 813.901181ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:52:49.715693  615834 start.go:247] waiting for cluster config update ...
	I1115 10:52:49.715707  615834 start.go:256] writing updated cluster config ...
	I1115 10:52:49.718865  615834 out.go:203] 
	I1115 10:52:49.721875  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:52:49.721967  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:52:49.725237  615834 out.go:179] * Starting "ha-439113-m02" control-plane node in "ha-439113" cluster
	I1115 10:52:49.728036  615834 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:52:49.731102  615834 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:52:49.733925  615834 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:52:49.733952  615834 cache.go:65] Caching tarball of preloaded images
	I1115 10:52:49.733991  615834 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:52:49.734045  615834 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:52:49.734056  615834 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:52:49.734161  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:52:49.753102  615834 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:52:49.753125  615834 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:52:49.753138  615834 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:52:49.753162  615834 start.go:360] acquireMachinesLock for ha-439113-m02: {Name:mk3e9fb80c1177aa3d9d60f93ad9a2d436f1d794 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:52:49.753268  615834 start.go:364] duration metric: took 84.202µs to acquireMachinesLock for "ha-439113-m02"
	I1115 10:52:49.753299  615834 start.go:93] Provisioning new machine with config: &{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:52:49.753373  615834 start.go:125] createHost starting for "m02" (driver="docker")
	I1115 10:52:49.756892  615834 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:52:49.757017  615834 start.go:159] libmachine.API.Create for "ha-439113" (driver="docker")
	I1115 10:52:49.757042  615834 client.go:173] LocalClient.Create starting
	I1115 10:52:49.757111  615834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 10:52:49.757147  615834 main.go:143] libmachine: Decoding PEM data...
	I1115 10:52:49.757164  615834 main.go:143] libmachine: Parsing certificate...
	I1115 10:52:49.757217  615834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 10:52:49.757243  615834 main.go:143] libmachine: Decoding PEM data...
	I1115 10:52:49.757253  615834 main.go:143] libmachine: Parsing certificate...
	I1115 10:52:49.757516  615834 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:52:49.774864  615834 network_create.go:77] Found existing network {name:ha-439113 subnet:0x4001ca8120 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1115 10:52:49.774904  615834 kic.go:121] calculated static IP "192.168.49.3" for the "ha-439113-m02" container
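kic.go reports a "calculated static IP" of 192.168.49.3 for ha-439113-m02: additional nodes get sequential addresses in the cluster network, one past the primary node's .2. An illustrative sketch of that arithmetic (assumed scheme; overflow and reserved addresses such as the .254 VIP are ignored):

    package main

    import (
        "fmt"
        "net"
    )

    // nodeIP returns base address + (nodeIndex + 1) within the given subnet,
    // so node 1 gets .2, node 2 (m02) gets .3, and so on.
    func nodeIP(subnet string, nodeIndex int) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(subnet)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        out := make(net.IP, len(ip))
        copy(out, ip)
        out[3] += byte(nodeIndex + 1)
        return out, nil
    }

    func main() {
        ip, _ := nodeIP("192.168.49.0/24", 2) // second node (m02)
        fmt.Println(ip)                       // 192.168.49.3
    }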
	I1115 10:52:49.775009  615834 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:52:49.814441  615834 cli_runner.go:164] Run: docker volume create ha-439113-m02 --label name.minikube.sigs.k8s.io=ha-439113-m02 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:52:49.834186  615834 oci.go:103] Successfully created a docker volume ha-439113-m02
	I1115 10:52:49.834270  615834 cli_runner.go:164] Run: docker run --rm --name ha-439113-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-439113-m02 --entrypoint /usr/bin/test -v ha-439113-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:52:50.436132  615834 oci.go:107] Successfully prepared a docker volume ha-439113-m02
	I1115 10:52:50.436184  615834 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:52:50.436195  615834 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:52:50.436263  615834 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-439113-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:52:55.039651  615834 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-439113-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.603345462s)
	I1115 10:52:55.039694  615834 kic.go:203] duration metric: took 4.603495297s to extract preloaded images to volume ...
	W1115 10:52:55.039945  615834 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:52:55.040125  615834 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:52:55.106173  615834 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-439113-m02 --name ha-439113-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-439113-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-439113-m02 --network ha-439113 --ip 192.168.49.3 --volume ha-439113-m02:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:52:55.420193  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Running}}
	I1115 10:52:55.449058  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 10:52:55.475245  615834 cli_runner.go:164] Run: docker exec ha-439113-m02 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:52:55.537303  615834 oci.go:144] the created container "ha-439113-m02" has a running status.
	I1115 10:52:55.537331  615834 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa...
	I1115 10:52:55.935489  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1115 10:52:55.935608  615834 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:52:55.957478  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 10:52:55.984754  615834 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:52:55.984774  615834 kic_runner.go:114] Args: [docker exec --privileged ha-439113-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:52:56.029582  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
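Everything from here on talks to the new node over SSH at 127.0.0.1: the container only publishes its 22/tcp to an ephemeral host port, so minikube asks Docker which port that is (the inspect template appears verbatim in the lines below). A standalone sketch of that lookup:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort returns the host port Docker bound to the container's 22/tcp.
    func sshHostPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("ha-439113-m02")
        fmt.Println(port, err) // e.g. 33529 <nil>
    }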
	I1115 10:52:56.047598  615834 machine.go:94] provisionDockerMachine start ...
	I1115 10:52:56.047704  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:52:56.065542  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:52:56.065886  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33529 <nil> <nil>}
	I1115 10:52:56.065906  615834 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:52:56.066546  615834 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 10:52:59.224486  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02
	
	I1115 10:52:59.224511  615834 ubuntu.go:182] provisioning hostname "ha-439113-m02"
	I1115 10:52:59.224600  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:52:59.242529  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:52:59.242842  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33529 <nil> <nil>}
	I1115 10:52:59.242860  615834 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113-m02 && echo "ha-439113-m02" | sudo tee /etc/hostname
	I1115 10:52:59.402452  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02
	
	I1115 10:52:59.402599  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:52:59.419963  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:52:59.420272  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33529 <nil> <nil>}
	I1115 10:52:59.420289  615834 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:52:59.569155  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:52:59.569183  615834 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 10:52:59.569199  615834 ubuntu.go:190] setting up certificates
	I1115 10:52:59.569216  615834 provision.go:84] configureAuth start
	I1115 10:52:59.569292  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 10:52:59.585645  615834 provision.go:143] copyHostCerts
	I1115 10:52:59.585694  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:52:59.585729  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 10:52:59.585740  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:52:59.585818  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 10:52:59.585962  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:52:59.586001  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 10:52:59.586010  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:52:59.586111  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 10:52:59.586179  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:52:59.586205  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 10:52:59.586214  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:52:59.586242  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 10:52:59.586299  615834 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113-m02 san=[127.0.0.1 192.168.49.3 ha-439113-m02 localhost minikube]
	I1115 10:52:59.933236  615834 provision.go:177] copyRemoteCerts
	I1115 10:52:59.933311  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:52:59.933366  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:52:59.951313  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:53:00.081558  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 10:53:00.081698  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 10:53:00.144402  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 10:53:00.144483  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:53:00.220295  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 10:53:00.220405  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 10:53:00.292332  615834 provision.go:87] duration metric: took 723.096278ms to configureAuth
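
The configureAuth step above generates a per-node server certificate with the SANs listed at provision.go:117 (127.0.0.1, 192.168.49.3, ha-439113-m02, localhost, minikube) and copies it to /etc/docker/server.pem on the node. A minimal sketch for inspecting the result, assuming the kic container name shown in this log and that openssl is available in the node image (it is invoked later in this log):

	# hedged check from the host; not part of the test run
	docker exec ha-439113-m02 openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
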
	I1115 10:53:00.292371  615834 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:53:00.292607  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:53:00.303924  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:53:00.364390  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:53:00.364745  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33529 <nil> <nil>}
	I1115 10:53:00.364768  615834 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:53:00.680385  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:53:00.680412  615834 machine.go:97] duration metric: took 4.632794086s to provisionDockerMachine
	I1115 10:53:00.680422  615834 client.go:176] duration metric: took 10.923374181s to LocalClient.Create
	I1115 10:53:00.680433  615834 start.go:167] duration metric: took 10.923416642s to libmachine.API.Create "ha-439113"
	I1115 10:53:00.680440  615834 start.go:293] postStartSetup for "ha-439113-m02" (driver="docker")
	I1115 10:53:00.680450  615834 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:53:00.680514  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:53:00.680559  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:53:00.701184  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:53:00.808970  615834 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:53:00.813043  615834 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:53:00.813076  615834 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:53:00.813088  615834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 10:53:00.813159  615834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 10:53:00.813239  615834 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 10:53:00.813249  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 10:53:00.813348  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:53:00.821411  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:53:00.841985  615834 start.go:296] duration metric: took 161.52912ms for postStartSetup
	I1115 10:53:00.842403  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 10:53:00.861175  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:53:00.861487  615834 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:53:00.861634  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:53:00.879284  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:53:00.982093  615834 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:53:00.986904  615834 start.go:128] duration metric: took 11.233516391s to createHost
	I1115 10:53:00.986931  615834 start.go:83] releasing machines lock for "ha-439113-m02", held for 11.233648971s
	I1115 10:53:00.987002  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 10:53:01.008778  615834 out.go:179] * Found network options:
	I1115 10:53:01.012367  615834 out.go:179]   - NO_PROXY=192.168.49.2
	W1115 10:53:01.015379  615834 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 10:53:01.015438  615834 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 10:53:01.015521  615834 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:53:01.015568  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:53:01.015913  615834 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:53:01.015966  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 10:53:01.041833  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:53:01.042637  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 10:53:01.242849  615834 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:53:01.247673  615834 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:53:01.247794  615834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:53:01.278117  615834 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 10:53:01.278197  615834 start.go:496] detecting cgroup driver to use...
	I1115 10:53:01.278246  615834 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:53:01.278325  615834 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:53:01.296968  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:53:01.310621  615834 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:53:01.310739  615834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:53:01.328674  615834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:53:01.348385  615834 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:53:01.482770  615834 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:53:01.618324  615834 docker.go:234] disabling docker service ...
	I1115 10:53:01.618397  615834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:53:01.640724  615834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:53:01.655986  615834 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:53:01.791958  615834 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:53:01.922418  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:53:01.936296  615834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:53:01.950481  615834 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:53:01.950545  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:53:01.959267  615834 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:53:01.959380  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:53:01.968531  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:53:01.977563  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:53:01.986850  615834 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:53:01.995480  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:53:02.004423  615834 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:53:02.020832  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:53:02.032163  615834 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:53:02.041833  615834 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:53:02.054019  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:53:02.173378  615834 ssh_runner.go:195] Run: sudo systemctl restart crio
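
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. Reconstructed from those expressions (this is not a dump of the real file, and unrelated keys in the drop-in are omitted), the affected settings end up roughly as:

	# approximate result of the edits above; other settings in 02-crio.conf are left untouched
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
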
	I1115 10:53:02.316319  615834 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:53:02.316436  615834 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:53:02.320549  615834 start.go:564] Will wait 60s for crictl version
	I1115 10:53:02.320660  615834 ssh_runner.go:195] Run: which crictl
	I1115 10:53:02.324799  615834 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:53:02.354893  615834 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:53:02.355043  615834 ssh_runner.go:195] Run: crio --version
	I1115 10:53:02.385812  615834 ssh_runner.go:195] Run: crio --version
	I1115 10:53:02.417340  615834 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:53:02.420195  615834 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 10:53:02.422960  615834 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:53:02.441143  615834 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 10:53:02.445342  615834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:53:02.455459  615834 mustload.go:66] Loading cluster: ha-439113
	I1115 10:53:02.455671  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:53:02.455920  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:53:02.473394  615834 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:53:02.473671  615834 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.3
	I1115 10:53:02.473690  615834 certs.go:195] generating shared ca certs ...
	I1115 10:53:02.473706  615834 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:53:02.473835  615834 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 10:53:02.473881  615834 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 10:53:02.473892  615834 certs.go:257] generating profile certs ...
	I1115 10:53:02.473967  615834 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 10:53:02.473999  615834 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.29032bc8
	I1115 10:53:02.474016  615834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.29032bc8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1115 10:53:02.688847  615834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.29032bc8 ...
	I1115 10:53:02.688884  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.29032bc8: {Name:mkb1e34c4420c67bd5263ca2027113dec29d5023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:53:02.689081  615834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.29032bc8 ...
	I1115 10:53:02.689099  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.29032bc8: {Name:mk77433e62660c76c57a09a0de21042793ab4c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:53:02.689184  615834 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.29032bc8 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt
	I1115 10:53:02.689315  615834 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.29032bc8 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key
	I1115 10:53:02.689447  615834 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
	I1115 10:53:02.689464  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 10:53:02.689480  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 10:53:02.689499  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 10:53:02.689515  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 10:53:02.689529  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 10:53:02.689540  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 10:53:02.689551  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 10:53:02.689561  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 10:53:02.689616  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 10:53:02.689647  615834 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 10:53:02.689659  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:53:02.689685  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 10:53:02.689709  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:53:02.689734  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 10:53:02.689777  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:53:02.689812  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:53:02.689830  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 10:53:02.689843  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 10:53:02.689900  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:53:02.707023  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:53:02.809240  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 10:53:02.812976  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 10:53:02.821441  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 10:53:02.825073  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 10:53:02.833376  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 10:53:02.836965  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 10:53:02.845363  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 10:53:02.849022  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 10:53:02.857893  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 10:53:02.861635  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 10:53:02.869981  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 10:53:02.873465  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 10:53:02.881678  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:53:02.900782  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 10:53:02.918838  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:53:02.936798  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:53:02.955025  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1115 10:53:02.974508  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:53:02.992409  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:53:03.015675  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:53:03.035409  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:53:03.054563  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 10:53:03.072550  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 10:53:03.090566  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 10:53:03.104801  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 10:53:03.117939  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 10:53:03.130822  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 10:53:03.143834  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 10:53:03.156727  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 10:53:03.170529  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 10:53:03.183731  615834 ssh_runner.go:195] Run: openssl version
	I1115 10:53:03.190745  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 10:53:03.200536  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 10:53:03.204392  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 10:53:03.204457  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 10:53:03.245496  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:53:03.253893  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:53:03.262357  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:53:03.266439  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:53:03.266529  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:53:03.307492  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:53:03.316145  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 10:53:03.324975  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 10:53:03.328846  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 10:53:03.328991  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 10:53:03.370097  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 10:53:03.378718  615834 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:53:03.382946  615834 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:53:03.383037  615834 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1115 10:53:03.383130  615834 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:53:03.383160  615834 kube-vip.go:115] generating kube-vip config ...
	I1115 10:53:03.383207  615834 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 10:53:03.395337  615834 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
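
The kube-vip.go:163 warning above means "lsmod | grep ip_vs" found nothing inside the node, so kube-vip is rendered without IPVS-based control-plane load-balancing. With the docker driver the node container shares the host kernel, so the modules would have to be loaded on the host; a hedged sketch (the module list is the usual IPVS set, not taken from this log):

	# run on the host; the kic node container shares the host kernel
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	lsmod | grep ip_vs
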
	I1115 10:53:03.395446  615834 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1115 10:53:03.395555  615834 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:53:03.403843  615834 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:53:03.403920  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 10:53:03.411984  615834 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 10:53:03.425530  615834 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:53:03.440951  615834 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
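
The kube-vip manifest generated above is copied into /etc/kubernetes/manifests, so the kubelet runs it as a static pod; it appears later in this log as "kube-vip-ha-439113-m02". A sketch for confirming that from the host once the node has joined, assuming the kubeconfig context carries the profile name:

	kubectl --context ha-439113 -n kube-system get pod kube-vip-ha-439113-m02
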
	I1115 10:53:03.454339  615834 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 10:53:03.458265  615834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:53:03.469366  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:53:03.584964  615834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:53:03.603455  615834 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:53:03.603770  615834 start.go:318] joinCluster: &{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:53:03.603895  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1115 10:53:03.603952  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:53:03.623443  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:53:03.804336  615834 start.go:344] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:53:03.804415  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p0tbi8.pbuwwja7os5f0i73 --discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-439113-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I1115 10:53:26.054188  615834 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p0tbi8.pbuwwja7os5f0i73 --discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-439113-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (22.249750371s)
	I1115 10:53:26.054265  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1115 10:53:26.438754  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-439113-m02 minikube.k8s.io/updated_at=2025_11_15T10_53_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=ha-439113 minikube.k8s.io/primary=false
	I1115 10:53:26.595421  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-439113-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1115 10:53:26.780779  615834 start.go:320] duration metric: took 23.177004016s to joinCluster
	I1115 10:53:26.780842  615834 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:53:26.781160  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:53:26.783929  615834 out.go:179] * Verifying Kubernetes components...
	I1115 10:53:26.786961  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:53:26.983446  615834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:53:26.998184  615834 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 10:53:26.998257  615834 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 10:53:26.998526  615834 node_ready.go:35] waiting up to 6m0s for node "ha-439113-m02" to be "Ready" ...
	W1115 10:53:29.002308  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:31.002655  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:33.011921  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:35.501884  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:38.003067  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:40.505715  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:43.002629  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:45.501909  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:47.502152  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:50.002051  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:52.002438  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:54.502175  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:56.504031  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:53:59.001885  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:01.002028  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:03.003943  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:05.502891  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:07.502959  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:10.002500  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:12.002873  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:14.502243  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:16.502532  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	W1115 10:54:19.002594  615834 node_ready.go:57] node "ha-439113-m02" has "Ready":"False" status (will retry)
	I1115 10:54:20.502277  615834 node_ready.go:49] node "ha-439113-m02" is "Ready"
	I1115 10:54:20.502316  615834 node_ready.go:38] duration metric: took 53.503771317s for node "ha-439113-m02" to be "Ready" ...
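
The polling above is minikube's internal readiness loop; the new control-plane node took roughly 54 seconds to report Ready. The same condition can be awaited with kubectl, a sketch assuming the default context name for this profile:

	kubectl --context ha-439113 wait --for=condition=Ready node/ha-439113-m02 --timeout=6m
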
	I1115 10:54:20.502329  615834 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:54:20.502389  615834 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:54:20.514872  615834 api_server.go:72] duration metric: took 53.733982457s to wait for apiserver process to appear ...
	I1115 10:54:20.514895  615834 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:54:20.514914  615834 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 10:54:20.524348  615834 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 10:54:20.528505  615834 api_server.go:141] control plane version: v1.34.1
	I1115 10:54:20.528579  615834 api_server.go:131] duration metric: took 13.676063ms to wait for apiserver health ...
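
The healthz probe goes straight to the first control plane at 192.168.49.2:8443 (the stale VIP host was overridden a few lines earlier). A hedged manual equivalent, reusing the cluster CA path from this log and assuming /healthz remains readable without client credentials, as it is on a default install:

	curl --cacert /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt https://192.168.49.2:8443/healthz
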
	I1115 10:54:20.528621  615834 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:54:20.533546  615834 system_pods.go:59] 17 kube-system pods found
	I1115 10:54:20.533579  615834 system_pods.go:61] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running
	I1115 10:54:20.533586  615834 system_pods.go:61] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running
	I1115 10:54:20.533591  615834 system_pods.go:61] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 10:54:20.533595  615834 system_pods.go:61] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 10:54:20.533600  615834 system_pods.go:61] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 10:54:20.533604  615834 system_pods.go:61] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running
	I1115 10:54:20.533609  615834 system_pods.go:61] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 10:54:20.533614  615834 system_pods.go:61] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 10:54:20.533618  615834 system_pods.go:61] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running
	I1115 10:54:20.533623  615834 system_pods.go:61] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 10:54:20.533628  615834 system_pods.go:61] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running
	I1115 10:54:20.533634  615834 system_pods.go:61] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 10:54:20.533639  615834 system_pods.go:61] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running
	I1115 10:54:20.533652  615834 system_pods.go:61] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 10:54:20.533656  615834 system_pods.go:61] "kube-vip-ha-439113" [397a8753-e06e-4144-882e-6bbf595950d8] Running
	I1115 10:54:20.533660  615834 system_pods.go:61] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 10:54:20.533668  615834 system_pods.go:61] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running
	I1115 10:54:20.533674  615834 system_pods.go:74] duration metric: took 5.033609ms to wait for pod list to return data ...
	I1115 10:54:20.533684  615834 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:54:20.536782  615834 default_sa.go:45] found service account: "default"
	I1115 10:54:20.536808  615834 default_sa.go:55] duration metric: took 3.117861ms for default service account to be created ...
	I1115 10:54:20.536817  615834 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:54:20.540627  615834 system_pods.go:86] 17 kube-system pods found
	I1115 10:54:20.540658  615834 system_pods.go:89] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running
	I1115 10:54:20.540664  615834 system_pods.go:89] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running
	I1115 10:54:20.540669  615834 system_pods.go:89] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 10:54:20.540673  615834 system_pods.go:89] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 10:54:20.540679  615834 system_pods.go:89] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 10:54:20.540683  615834 system_pods.go:89] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running
	I1115 10:54:20.540687  615834 system_pods.go:89] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 10:54:20.540691  615834 system_pods.go:89] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 10:54:20.540697  615834 system_pods.go:89] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running
	I1115 10:54:20.540701  615834 system_pods.go:89] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 10:54:20.540736  615834 system_pods.go:89] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running
	I1115 10:54:20.540747  615834 system_pods.go:89] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 10:54:20.540751  615834 system_pods.go:89] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running
	I1115 10:54:20.540755  615834 system_pods.go:89] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 10:54:20.540759  615834 system_pods.go:89] "kube-vip-ha-439113" [397a8753-e06e-4144-882e-6bbf595950d8] Running
	I1115 10:54:20.540763  615834 system_pods.go:89] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 10:54:20.540767  615834 system_pods.go:89] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running
	I1115 10:54:20.540780  615834 system_pods.go:126] duration metric: took 3.95687ms to wait for k8s-apps to be running ...
	I1115 10:54:20.540788  615834 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:54:20.540843  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:54:20.564288  615834 system_svc.go:56] duration metric: took 23.490494ms WaitForService to wait for kubelet
	I1115 10:54:20.564316  615834 kubeadm.go:587] duration metric: took 53.783432535s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:54:20.564335  615834 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:54:20.569448  615834 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:54:20.569480  615834 node_conditions.go:123] node cpu capacity is 2
	I1115 10:54:20.569492  615834 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:54:20.569497  615834 node_conditions.go:123] node cpu capacity is 2
	I1115 10:54:20.569504  615834 node_conditions.go:105] duration metric: took 5.163235ms to run NodePressure ...
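
The checks above (kube-system pods, default service account, kubelet service, node pressure conditions) correspond to ordinary kubectl queries; a sketch for taking the same look by hand, assuming the default context name:

	kubectl --context ha-439113 get nodes -o wide
	kubectl --context ha-439113 -n kube-system get pods -o wide
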
	I1115 10:54:20.569515  615834 start.go:242] waiting for startup goroutines ...
	I1115 10:54:20.569540  615834 start.go:256] writing updated cluster config ...
	I1115 10:54:20.573029  615834 out.go:203] 
	I1115 10:54:20.576017  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:54:20.576141  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:54:20.579475  615834 out.go:179] * Starting "ha-439113-m03" control-plane node in "ha-439113" cluster
	I1115 10:54:20.582281  615834 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:54:20.585209  615834 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:54:20.587846  615834 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:54:20.587912  615834 cache.go:65] Caching tarball of preloaded images
	I1115 10:54:20.587882  615834 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:54:20.588228  615834 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:54:20.588245  615834 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:54:20.588459  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:54:20.612425  615834 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:54:20.612449  615834 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:54:20.612466  615834 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:54:20.612497  615834 start.go:360] acquireMachinesLock for ha-439113-m03: {Name:mka79aa6495619db3e64a5700d9ed838bd218f87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:54:20.612613  615834 start.go:364] duration metric: took 96.773µs to acquireMachinesLock for "ha-439113-m03"
	I1115 10:54:20.612643  615834 start.go:93] Provisioning new machine with config: &{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:fal
se kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:54:20.612748  615834 start.go:125] createHost starting for "m03" (driver="docker")
	I1115 10:54:20.618177  615834 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:54:20.618305  615834 start.go:159] libmachine.API.Create for "ha-439113" (driver="docker")
	I1115 10:54:20.618339  615834 client.go:173] LocalClient.Create starting
	I1115 10:54:20.618426  615834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 10:54:20.618465  615834 main.go:143] libmachine: Decoding PEM data...
	I1115 10:54:20.618483  615834 main.go:143] libmachine: Parsing certificate...
	I1115 10:54:20.618539  615834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 10:54:20.618560  615834 main.go:143] libmachine: Decoding PEM data...
	I1115 10:54:20.618570  615834 main.go:143] libmachine: Parsing certificate...
	I1115 10:54:20.618824  615834 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:54:20.638778  615834 network_create.go:77] Found existing network {name:ha-439113 subnet:0x4001d2e690 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1115 10:54:20.638818  615834 kic.go:121] calculated static IP "192.168.49.4" for the "ha-439113-m03" container
	I1115 10:54:20.638904  615834 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:54:20.657632  615834 cli_runner.go:164] Run: docker volume create ha-439113-m03 --label name.minikube.sigs.k8s.io=ha-439113-m03 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:54:20.675738  615834 oci.go:103] Successfully created a docker volume ha-439113-m03
	I1115 10:54:20.675835  615834 cli_runner.go:164] Run: docker run --rm --name ha-439113-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-439113-m03 --entrypoint /usr/bin/test -v ha-439113-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:54:21.209664  615834 oci.go:107] Successfully prepared a docker volume ha-439113-m03
	I1115 10:54:21.209729  615834 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:54:21.209742  615834 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:54:21.209821  615834 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-439113-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:54:25.642090  615834 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-439113-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.432221799s)
	I1115 10:54:25.642125  615834 kic.go:203] duration metric: took 4.432378543s to extract preloaded images to volume ...
	W1115 10:54:25.642270  615834 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:54:25.642387  615834 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:54:25.703940  615834 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-439113-m03 --name ha-439113-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-439113-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-439113-m03 --network ha-439113 --ip 192.168.49.4 --volume ha-439113-m03:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:54:26.040112  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Running}}
	I1115 10:54:26.066450  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 10:54:26.092550  615834 cli_runner.go:164] Run: docker exec ha-439113-m03 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:54:26.151852  615834 oci.go:144] the created container "ha-439113-m03" has a running status.
	I1115 10:54:26.151878  615834 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa...
	I1115 10:54:27.113374  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1115 10:54:27.113470  615834 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:54:27.134901  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 10:54:27.152034  615834 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:54:27.152059  615834 kic_runner.go:114] Args: [docker exec --privileged ha-439113-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:54:27.195662  615834 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 10:54:27.223784  615834 machine.go:94] provisionDockerMachine start ...
	I1115 10:54:27.223875  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:27.242041  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:54:27.242447  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33534 <nil> <nil>}
	I1115 10:54:27.242463  615834 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:54:27.243142  615834 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 10:54:30.397276  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m03
	
	I1115 10:54:30.397299  615834 ubuntu.go:182] provisioning hostname "ha-439113-m03"
	I1115 10:54:30.397373  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:30.416594  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:54:30.417064  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33534 <nil> <nil>}
	I1115 10:54:30.417081  615834 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113-m03 && echo "ha-439113-m03" | sudo tee /etc/hostname
	I1115 10:54:30.584566  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m03
	
	I1115 10:54:30.584689  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:30.605012  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:54:30.605315  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33534 <nil> <nil>}
	I1115 10:54:30.605332  615834 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:54:30.765007  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:54:30.765033  615834 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 10:54:30.765049  615834 ubuntu.go:190] setting up certificates
	I1115 10:54:30.765058  615834 provision.go:84] configureAuth start
	I1115 10:54:30.765121  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 10:54:30.786754  615834 provision.go:143] copyHostCerts
	I1115 10:54:30.786811  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:54:30.786846  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 10:54:30.786858  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 10:54:30.786950  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 10:54:30.787046  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:54:30.787077  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 10:54:30.787083  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 10:54:30.787114  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 10:54:30.787169  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:54:30.787194  615834 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 10:54:30.787201  615834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 10:54:30.787225  615834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 10:54:30.787298  615834 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113-m03 san=[127.0.0.1 192.168.49.4 ha-439113-m03 localhost minikube]
	I1115 10:54:31.527679  615834 provision.go:177] copyRemoteCerts
	I1115 10:54:31.527756  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:54:31.527803  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:31.550626  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 10:54:31.657012  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 10:54:31.657081  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 10:54:31.677807  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 10:54:31.677871  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 10:54:31.700160  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 10:54:31.700222  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:54:31.721356  615834 provision.go:87] duration metric: took 956.283987ms to configureAuth
	I1115 10:54:31.721382  615834 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:54:31.721638  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:54:31.721743  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:31.746090  615834 main.go:143] libmachine: Using SSH client type: native
	I1115 10:54:31.746393  615834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33534 <nil> <nil>}
	I1115 10:54:31.746414  615834 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:54:32.073048  615834 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:54:32.073072  615834 machine.go:97] duration metric: took 4.849269283s to provisionDockerMachine
	I1115 10:54:32.073081  615834 client.go:176] duration metric: took 11.454730895s to LocalClient.Create
	I1115 10:54:32.073100  615834 start.go:167] duration metric: took 11.454796102s to libmachine.API.Create "ha-439113"
	I1115 10:54:32.073106  615834 start.go:293] postStartSetup for "ha-439113-m03" (driver="docker")
	I1115 10:54:32.073128  615834 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:54:32.073207  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:54:32.073254  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:32.094317  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 10:54:32.205944  615834 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:54:32.211106  615834 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:54:32.211131  615834 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:54:32.211141  615834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 10:54:32.211196  615834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 10:54:32.211273  615834 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 10:54:32.211280  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 10:54:32.211381  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:54:32.220211  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:54:32.239918  615834 start.go:296] duration metric: took 166.785032ms for postStartSetup
	I1115 10:54:32.240282  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 10:54:32.257694  615834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 10:54:32.257993  615834 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:54:32.258046  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:32.284964  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 10:54:32.386885  615834 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:54:32.392199  615834 start.go:128] duration metric: took 11.779435584s to createHost
	I1115 10:54:32.392225  615834 start.go:83] releasing machines lock for "ha-439113-m03", held for 11.779599443s
	I1115 10:54:32.392307  615834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 10:54:32.415833  615834 out.go:179] * Found network options:
	I1115 10:54:32.418534  615834 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1115 10:54:32.421302  615834 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 10:54:32.421337  615834 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 10:54:32.421361  615834 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 10:54:32.421377  615834 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 10:54:32.421453  615834 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:54:32.421499  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:32.421776  615834 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:54:32.421830  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:54:32.447145  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 10:54:32.460578  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 10:54:32.608958  615834 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:54:32.674684  615834 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:54:32.674759  615834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:54:32.704172  615834 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 10:54:32.704198  615834 start.go:496] detecting cgroup driver to use...
	I1115 10:54:32.704232  615834 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:54:32.704283  615834 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:54:32.723324  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:54:32.737729  615834 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:54:32.737795  615834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:54:32.756038  615834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:54:32.775957  615834 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:54:32.915213  615834 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:54:33.052803  615834 docker.go:234] disabling docker service ...
	I1115 10:54:33.052907  615834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:54:33.078043  615834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:54:33.094926  615834 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:54:33.230549  615834 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:54:33.359393  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:54:33.372746  615834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:54:33.388589  615834 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:54:33.388660  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:54:33.400545  615834 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:54:33.400613  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:54:33.412976  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:54:33.422489  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:54:33.431851  615834 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:54:33.441690  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:54:33.452824  615834 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:54:33.469461  615834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:54:33.479689  615834 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:54:33.487650  615834 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:54:33.495844  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:54:33.622741  615834 ssh_runner.go:195] Run: sudo systemctl restart crio
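	The sed edits above leave a small drop-in at /etc/crio/crio.conf.d/02-crio.conf that pins the pause image to registry.k8s.io/pause:3.10.1, switches CRI-O to the cgroupfs manager, and allows containers to bind low ports (ip_unprivileged_port_start=0). A minimal sketch for spot-checking the result on the node after the restart, assuming docker exec access to the ha-439113-m03 container (any shell on the node works the same way):

	# show the settings the sed edits are expected to have produced
	docker exec ha-439113-m03 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# confirm CRI-O came back up and answers on its socket
	docker exec ha-439113-m03 sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version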
	I1115 10:54:33.759405  615834 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:54:33.759527  615834 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:54:33.763550  615834 start.go:564] Will wait 60s for crictl version
	I1115 10:54:33.763664  615834 ssh_runner.go:195] Run: which crictl
	I1115 10:54:33.767583  615834 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:54:33.803949  615834 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:54:33.804117  615834 ssh_runner.go:195] Run: crio --version
	I1115 10:54:33.834618  615834 ssh_runner.go:195] Run: crio --version
	I1115 10:54:33.872348  615834 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:54:33.875141  615834 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 10:54:33.878028  615834 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1115 10:54:33.880834  615834 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:54:33.898757  615834 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 10:54:33.902716  615834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:54:33.913050  615834 mustload.go:66] Loading cluster: ha-439113
	I1115 10:54:33.913297  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:54:33.913562  615834 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:54:33.932916  615834 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:54:33.933195  615834 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.4
	I1115 10:54:33.933212  615834 certs.go:195] generating shared ca certs ...
	I1115 10:54:33.933228  615834 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:54:33.933349  615834 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 10:54:33.933400  615834 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 10:54:33.933414  615834 certs.go:257] generating profile certs ...
	I1115 10:54:33.933496  615834 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 10:54:33.933533  615834 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.9f17abc1
	I1115 10:54:33.933550  615834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.9f17abc1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1115 10:54:34.392462  615834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.9f17abc1 ...
	I1115 10:54:34.392493  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.9f17abc1: {Name:mk57469c45faf40e8877724cc1e54dca438fdabb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:54:34.392690  615834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.9f17abc1 ...
	I1115 10:54:34.392707  615834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.9f17abc1: {Name:mke21c76fcddbd31cd7b88d6b0fe560b003ef850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:54:34.392820  615834 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.9f17abc1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt
	I1115 10:54:34.392987  615834 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.9f17abc1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key
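	Because a third control-plane node is being added, the shared apiserver serving certificate is regenerated so its SAN list covers the new node IP 192.168.49.4 alongside the existing node IPs and the HA VIP 192.168.49.254 (the full list is in the "Generating cert" line above). A quick check of the regenerated certificate with openssl on the host, using the path from the log:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
	# expected to list 192.168.49.2, 192.168.49.3, 192.168.49.4 and 192.168.49.254 among the IP SANs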
	I1115 10:54:34.393123  615834 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
	I1115 10:54:34.393142  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 10:54:34.393159  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 10:54:34.393180  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 10:54:34.393192  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 10:54:34.393210  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 10:54:34.393228  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 10:54:34.393240  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 10:54:34.393258  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 10:54:34.393313  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 10:54:34.393346  615834 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 10:54:34.393360  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:54:34.393384  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 10:54:34.393407  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:54:34.393437  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 10:54:34.393481  615834 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 10:54:34.393513  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 10:54:34.393530  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:54:34.393541  615834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 10:54:34.393601  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:54:34.417321  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:54:34.517227  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 10:54:34.521295  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 10:54:34.529863  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 10:54:34.533585  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 10:54:34.542122  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 10:54:34.545856  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 10:54:34.554443  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 10:54:34.558198  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 10:54:34.567184  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 10:54:34.570858  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 10:54:34.579554  615834 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 10:54:34.583242  615834 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 10:54:34.592001  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:54:34.611784  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 10:54:34.632105  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:54:34.651510  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:54:34.679392  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1115 10:54:34.701186  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:54:34.721218  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:54:34.739838  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:54:34.758607  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 10:54:34.783858  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:54:34.804612  615834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 10:54:34.823088  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 10:54:34.836703  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 10:54:34.856372  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 10:54:34.869724  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 10:54:34.884327  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 10:54:34.898714  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 10:54:34.912320  615834 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 10:54:34.928648  615834 ssh_runner.go:195] Run: openssl version
	I1115 10:54:34.936171  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 10:54:34.944931  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 10:54:34.949201  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 10:54:34.949309  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 10:54:34.990696  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:54:34.999528  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:54:35.008123  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:54:35.014850  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:54:35.014942  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:54:35.065078  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:54:35.074023  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 10:54:35.082594  615834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 10:54:35.086579  615834 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 10:54:35.086700  615834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 10:54:35.128687  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
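	The three `ln -fs ... /etc/ssl/certs/<hash>.0` steps follow the standard OpenSSL hashed-directory convention: the link name is the certificate's subject hash, which is exactly what the preceding `openssl x509 -hash -noout` calls printed, so the CA can be found by hash lookup at verification time. A minimal sketch of the same pattern for the minikubeCA certificate:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# openssl can now resolve the CA from the hashed trust directory
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem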
	I1115 10:54:35.137533  615834 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:54:35.141312  615834 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:54:35.141414  615834 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1115 10:54:35.141513  615834 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
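	The kubelet unit above is written to the node as a systemd drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp later in this log confirms the path). If a join misbehaves, the effective flags can be read straight off the node; a small sketch, assuming docker exec access to the container:

	# show the merged kubelet unit, including the 10-kubeadm.conf drop-in minikube writes
	docker exec ha-439113-m03 systemctl cat kubelet
	# confirm the node-specific flags match the values logged above
	docker exec ha-439113-m03 grep -E 'hostname-override|node-ip' \
	  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf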
	I1115 10:54:35.141549  615834 kube-vip.go:115] generating kube-vip config ...
	I1115 10:54:35.141607  615834 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 10:54:35.154374  615834 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:54:35.154479  615834 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
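	The generated manifest is a static pod: it is copied to /etc/kubernetes/manifests/kube-vip.yaml below, so the kubelet on each control-plane node runs it directly, and leader election (vip_leaderelection/plndr-cp-lock) decides which node answers on the VIP 192.168.49.254:8443. A hedged post-join check, run on any control-plane node and mirroring the kubectl invocations later in this log:

	# static pods show up with the node name appended
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get pods -n kube-system -o wide | grep kube-vip
	# the VIP should answer the same healthz probe the test later runs against the node IP
	curl -sk https://192.168.49.254:8443/healthz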
	I1115 10:54:35.154575  615834 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:54:35.162846  615834 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:54:35.162920  615834 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 10:54:35.171422  615834 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 10:54:35.184497  615834 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:54:35.198720  615834 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 10:54:35.221440  615834 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 10:54:35.226193  615834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:54:35.237290  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:54:35.357843  615834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:54:35.375474  615834 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:54:35.375810  615834 start.go:318] joinCluster: &{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:54:35.375986  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1115 10:54:35.376046  615834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:54:35.394797  615834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:54:35.573637  615834 start.go:344] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:54:35.573737  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 57tqrb.oxnolth70l2ucbah --discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-439113-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I1115 10:54:57.764910  615834 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 57tqrb.oxnolth70l2ucbah --discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-439113-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (22.191150768s)
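	With the join complete, m03 is a full control-plane member. A quick post-join check on the primary node, using the same kubectl binary and kubeconfig that the label and taint commands immediately below use:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get nodes -o wide
	# all three nodes (ha-439113, -m02, -m03) should carry the control-plane role label
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get nodes -l node-role.kubernetes.io/control-plane -o name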
	I1115 10:54:57.764977  615834 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1115 10:54:58.434841  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-439113-m03 minikube.k8s.io/updated_at=2025_11_15T10_54_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=ha-439113 minikube.k8s.io/primary=false
	I1115 10:54:58.570609  615834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-439113-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1115 10:54:58.705795  615834 start.go:320] duration metric: took 23.329979868s to joinCluster
	I1115 10:54:58.705850  615834 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:54:58.706784  615834 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:54:58.708964  615834 out.go:179] * Verifying Kubernetes components...
	I1115 10:54:58.711919  615834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:54:58.903183  615834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:54:58.919643  615834 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 10:54:58.919719  615834 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 10:54:58.920020  615834 node_ready.go:35] waiting up to 6m0s for node "ha-439113-m03" to be "Ready" ...
	W1115 10:55:00.924525  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:03.423536  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:05.426488  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:07.924192  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:09.924349  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:12.424510  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:14.924474  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:17.423576  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:19.424146  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:21.923891  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:23.924593  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:25.924801  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:27.925292  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:29.926056  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:32.423860  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:34.424326  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:36.923529  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:38.924051  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	W1115 10:55:41.423264  615834 node_ready.go:57] node "ha-439113-m03" has "Ready":"False" status (will retry)
	I1115 10:55:42.423952  615834 node_ready.go:49] node "ha-439113-m03" is "Ready"
	I1115 10:55:42.423989  615834 node_ready.go:38] duration metric: took 43.503945735s for node "ha-439113-m03" to be "Ready" ...
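	The retry loop above is the test's own poll of the Node object's Ready condition. The equivalent one-liner, shown only as a sketch and assuming a kubeconfig that points at the cluster, with the same 6m budget the test allows:

	kubectl wait --for=condition=Ready node/ha-439113-m03 --timeout=6m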
	I1115 10:55:42.424005  615834 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:55:42.424111  615834 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:55:42.440197  615834 api_server.go:72] duration metric: took 43.734318984s to wait for apiserver process to appear ...
	I1115 10:55:42.440226  615834 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:55:42.440245  615834 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 10:55:42.448913  615834 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 10:55:42.449893  615834 api_server.go:141] control plane version: v1.34.1
	I1115 10:55:42.449917  615834 api_server.go:131] duration metric: took 9.68478ms to wait for apiserver health ...
	I1115 10:55:42.449926  615834 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:55:42.456190  615834 system_pods.go:59] 24 kube-system pods found
	I1115 10:55:42.456222  615834 system_pods.go:61] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running
	I1115 10:55:42.456229  615834 system_pods.go:61] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running
	I1115 10:55:42.456234  615834 system_pods.go:61] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 10:55:42.456238  615834 system_pods.go:61] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 10:55:42.456243  615834 system_pods.go:61] "etcd-ha-439113-m03" [5e59ce68-9c25-4639-ac5a-1f55855c2a60] Running
	I1115 10:55:42.456249  615834 system_pods.go:61] "kindnet-kxl4t" [99aa3cce-8825-4785-a8c2-b42146240e09] Running
	I1115 10:55:42.456259  615834 system_pods.go:61] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 10:55:42.456264  615834 system_pods.go:61] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running
	I1115 10:55:42.456271  615834 system_pods.go:61] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 10:55:42.456276  615834 system_pods.go:61] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 10:55:42.456289  615834 system_pods.go:61] "kube-apiserver-ha-439113-m03" [46354a8c-2a61-4934-8b1a-57c563aa326b] Running
	I1115 10:55:42.456294  615834 system_pods.go:61] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running
	I1115 10:55:42.456299  615834 system_pods.go:61] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 10:55:42.456304  615834 system_pods.go:61] "kube-controller-manager-ha-439113-m03" [555d953c-b848-4daa-90c5-07b51c5c7722] Running
	I1115 10:55:42.456313  615834 system_pods.go:61] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running
	I1115 10:55:42.456317  615834 system_pods.go:61] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 10:55:42.456321  615834 system_pods.go:61] "kube-proxy-njlxj" [9150615b-96b9-416b-a5ca-79c380a8a9cb] Running
	I1115 10:55:42.456326  615834 system_pods.go:61] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running
	I1115 10:55:42.456331  615834 system_pods.go:61] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 10:55:42.456335  615834 system_pods.go:61] "kube-scheduler-ha-439113-m03" [e18cb155-9e7b-43e1-818b-bfff6a289f39] Running
	I1115 10:55:42.456343  615834 system_pods.go:61] "kube-vip-ha-439113" [397a8753-e06e-4144-882e-6bbf595950d8] Running
	I1115 10:55:42.456347  615834 system_pods.go:61] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 10:55:42.456353  615834 system_pods.go:61] "kube-vip-ha-439113-m03" [c0ddae32-acc6-4cda-8dde-084b2eea14a8] Running
	I1115 10:55:42.456358  615834 system_pods.go:61] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running
	I1115 10:55:42.456366  615834 system_pods.go:74] duration metric: took 6.434166ms to wait for pod list to return data ...
	I1115 10:55:42.456381  615834 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:55:42.460030  615834 default_sa.go:45] found service account: "default"
	I1115 10:55:42.460053  615834 default_sa.go:55] duration metric: took 3.666881ms for default service account to be created ...
	I1115 10:55:42.460063  615834 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:55:42.466296  615834 system_pods.go:86] 24 kube-system pods found
	I1115 10:55:42.466327  615834 system_pods.go:89] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running
	I1115 10:55:42.466334  615834 system_pods.go:89] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running
	I1115 10:55:42.466339  615834 system_pods.go:89] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 10:55:42.466343  615834 system_pods.go:89] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 10:55:42.466347  615834 system_pods.go:89] "etcd-ha-439113-m03" [5e59ce68-9c25-4639-ac5a-1f55855c2a60] Running
	I1115 10:55:42.466352  615834 system_pods.go:89] "kindnet-kxl4t" [99aa3cce-8825-4785-a8c2-b42146240e09] Running
	I1115 10:55:42.466357  615834 system_pods.go:89] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 10:55:42.466361  615834 system_pods.go:89] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running
	I1115 10:55:42.466371  615834 system_pods.go:89] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 10:55:42.466376  615834 system_pods.go:89] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 10:55:42.466383  615834 system_pods.go:89] "kube-apiserver-ha-439113-m03" [46354a8c-2a61-4934-8b1a-57c563aa326b] Running
	I1115 10:55:42.466387  615834 system_pods.go:89] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running
	I1115 10:55:42.466397  615834 system_pods.go:89] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 10:55:42.466402  615834 system_pods.go:89] "kube-controller-manager-ha-439113-m03" [555d953c-b848-4daa-90c5-07b51c5c7722] Running
	I1115 10:55:42.466408  615834 system_pods.go:89] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running
	I1115 10:55:42.466412  615834 system_pods.go:89] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 10:55:42.466425  615834 system_pods.go:89] "kube-proxy-njlxj" [9150615b-96b9-416b-a5ca-79c380a8a9cb] Running
	I1115 10:55:42.466430  615834 system_pods.go:89] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running
	I1115 10:55:42.466434  615834 system_pods.go:89] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 10:55:42.466441  615834 system_pods.go:89] "kube-scheduler-ha-439113-m03" [e18cb155-9e7b-43e1-818b-bfff6a289f39] Running
	I1115 10:55:42.466445  615834 system_pods.go:89] "kube-vip-ha-439113" [397a8753-e06e-4144-882e-6bbf595950d8] Running
	I1115 10:55:42.466449  615834 system_pods.go:89] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 10:55:42.466453  615834 system_pods.go:89] "kube-vip-ha-439113-m03" [c0ddae32-acc6-4cda-8dde-084b2eea14a8] Running
	I1115 10:55:42.466459  615834 system_pods.go:89] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running
	I1115 10:55:42.466465  615834 system_pods.go:126] duration metric: took 6.39762ms to wait for k8s-apps to be running ...
	I1115 10:55:42.466477  615834 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:55:42.466532  615834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:55:42.483083  615834 system_svc.go:56] duration metric: took 16.595924ms WaitForService to wait for kubelet
	I1115 10:55:42.483109  615834 kubeadm.go:587] duration metric: took 43.777236154s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:55:42.483126  615834 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:55:42.486588  615834 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:55:42.486618  615834 node_conditions.go:123] node cpu capacity is 2
	I1115 10:55:42.486630  615834 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:55:42.486634  615834 node_conditions.go:123] node cpu capacity is 2
	I1115 10:55:42.486639  615834 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:55:42.486643  615834 node_conditions.go:123] node cpu capacity is 2
	I1115 10:55:42.486648  615834 node_conditions.go:105] duration metric: took 3.516274ms to run NodePressure ...
	I1115 10:55:42.486661  615834 start.go:242] waiting for startup goroutines ...
	I1115 10:55:42.486686  615834 start.go:256] writing updated cluster config ...
	I1115 10:55:42.487017  615834 ssh_runner.go:195] Run: rm -f paused
	I1115 10:55:42.492297  615834 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:55:42.492803  615834 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:55:42.512652  615834 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4g6sm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.521118  615834 pod_ready.go:94] pod "coredns-66bc5c9577-4g6sm" is "Ready"
	I1115 10:55:42.521148  615834 pod_ready.go:86] duration metric: took 8.46948ms for pod "coredns-66bc5c9577-4g6sm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.521159  615834 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mlm6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.530647  615834 pod_ready.go:94] pod "coredns-66bc5c9577-mlm6m" is "Ready"
	I1115 10:55:42.530675  615834 pod_ready.go:86] duration metric: took 9.510034ms for pod "coredns-66bc5c9577-mlm6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.534052  615834 pod_ready.go:83] waiting for pod "etcd-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.540869  615834 pod_ready.go:94] pod "etcd-ha-439113" is "Ready"
	I1115 10:55:42.540905  615834 pod_ready.go:86] duration metric: took 6.827976ms for pod "etcd-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.540914  615834 pod_ready.go:83] waiting for pod "etcd-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.547047  615834 pod_ready.go:94] pod "etcd-ha-439113-m02" is "Ready"
	I1115 10:55:42.547075  615834 pod_ready.go:86] duration metric: took 6.153818ms for pod "etcd-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.547085  615834 pod_ready.go:83] waiting for pod "etcd-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:42.694101  615834 request.go:683] "Waited before sending request" delay="146.197061ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-439113-m03"
	I1115 10:55:42.893874  615834 request.go:683] "Waited before sending request" delay="196.290075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 10:55:42.897978  615834 pod_ready.go:94] pod "etcd-ha-439113-m03" is "Ready"
	I1115 10:55:42.898008  615834 pod_ready.go:86] duration metric: took 350.916746ms for pod "etcd-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:43.093314  615834 request.go:683] "Waited before sending request" delay="195.208873ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1115 10:55:43.097536  615834 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:43.293993  615834 request.go:683] "Waited before sending request" delay="196.352501ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113"
	I1115 10:55:43.493646  615834 request.go:683] "Waited before sending request" delay="196.260142ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 10:55:43.496635  615834 pod_ready.go:94] pod "kube-apiserver-ha-439113" is "Ready"
	I1115 10:55:43.496659  615834 pod_ready.go:86] duration metric: took 399.090863ms for pod "kube-apiserver-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:43.496669  615834 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:43.694042  615834 request.go:683] "Waited before sending request" delay="197.29025ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113-m02"
	I1115 10:55:43.893795  615834 request.go:683] "Waited before sending request" delay="196.360467ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 10:55:43.897823  615834 pod_ready.go:94] pod "kube-apiserver-ha-439113-m02" is "Ready"
	I1115 10:55:43.897863  615834 pod_ready.go:86] duration metric: took 401.185344ms for pod "kube-apiserver-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:43.897873  615834 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:44.094259  615834 request.go:683] "Waited before sending request" delay="196.313772ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113-m03"
	I1115 10:55:44.293320  615834 request.go:683] "Waited before sending request" delay="195.273875ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 10:55:44.297020  615834 pod_ready.go:94] pod "kube-apiserver-ha-439113-m03" is "Ready"
	I1115 10:55:44.297051  615834 pod_ready.go:86] duration metric: took 399.170241ms for pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:44.493352  615834 request.go:683] "Waited before sending request" delay="196.168342ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1115 10:55:44.497474  615834 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:44.693864  615834 request.go:683] "Waited before sending request" delay="196.265788ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-439113"
	I1115 10:55:44.893500  615834 request.go:683] "Waited before sending request" delay="196.2207ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 10:55:44.897637  615834 pod_ready.go:94] pod "kube-controller-manager-ha-439113" is "Ready"
	I1115 10:55:44.897665  615834 pod_ready.go:86] duration metric: took 400.156714ms for pod "kube-controller-manager-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:44.897677  615834 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:45.096158  615834 request.go:683] "Waited before sending request" delay="198.388069ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-439113-m02"
	I1115 10:55:45.294224  615834 request.go:683] "Waited before sending request" delay="191.268446ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 10:55:45.298265  615834 pod_ready.go:94] pod "kube-controller-manager-ha-439113-m02" is "Ready"
	I1115 10:55:45.298296  615834 pod_ready.go:86] duration metric: took 400.61198ms for pod "kube-controller-manager-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:45.298307  615834 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:45.493762  615834 request.go:683] "Waited before sending request" delay="195.347377ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-439113-m03"
	I1115 10:55:45.693315  615834 request.go:683] "Waited before sending request" delay="196.157273ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 10:55:45.696935  615834 pod_ready.go:94] pod "kube-controller-manager-ha-439113-m03" is "Ready"
	I1115 10:55:45.696960  615834 pod_ready.go:86] duration metric: took 398.646459ms for pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:45.893314  615834 request.go:683] "Waited before sending request" delay="196.244659ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1115 10:55:45.898174  615834 pod_ready.go:83] waiting for pod "kube-proxy-k7bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:46.093461  615834 request.go:683] "Waited before sending request" delay="195.191183ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k7bcn"
	I1115 10:55:46.293301  615834 request.go:683] "Waited before sending request" delay="196.162237ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 10:55:46.297337  615834 pod_ready.go:94] pod "kube-proxy-k7bcn" is "Ready"
	I1115 10:55:46.297371  615834 pod_ready.go:86] duration metric: took 399.168321ms for pod "kube-proxy-k7bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:46.297380  615834 pod_ready.go:83] waiting for pod "kube-proxy-kgftx" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:46.493781  615834 request.go:683] "Waited before sending request" delay="196.313435ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kgftx"
	I1115 10:55:46.693593  615834 request.go:683] "Waited before sending request" delay="196.546283ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 10:55:46.699555  615834 pod_ready.go:94] pod "kube-proxy-kgftx" is "Ready"
	I1115 10:55:46.699584  615834 pod_ready.go:86] duration metric: took 402.19773ms for pod "kube-proxy-kgftx" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:46.699594  615834 pod_ready.go:83] waiting for pod "kube-proxy-njlxj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:46.893960  615834 request.go:683] "Waited before sending request" delay="194.292628ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-njlxj"
	I1115 10:55:47.093699  615834 request.go:683] "Waited before sending request" delay="196.242706ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 10:55:47.097062  615834 pod_ready.go:94] pod "kube-proxy-njlxj" is "Ready"
	I1115 10:55:47.097099  615834 pod_ready.go:86] duration metric: took 397.498607ms for pod "kube-proxy-njlxj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:47.293346  615834 request.go:683] "Waited before sending request" delay="196.125125ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1115 10:55:47.297041  615834 pod_ready.go:83] waiting for pod "kube-scheduler-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:47.493398  615834 request.go:683] "Waited before sending request" delay="196.251543ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113"
	I1115 10:55:47.694024  615834 request.go:683] "Waited before sending request" delay="197.311831ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 10:55:47.697567  615834 pod_ready.go:94] pod "kube-scheduler-ha-439113" is "Ready"
	I1115 10:55:47.697592  615834 pod_ready.go:86] duration metric: took 400.52343ms for pod "kube-scheduler-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:47.697602  615834 pod_ready.go:83] waiting for pod "kube-scheduler-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:47.894055  615834 request.go:683] "Waited before sending request" delay="196.361846ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113-m02"
	I1115 10:55:48.093904  615834 request.go:683] "Waited before sending request" delay="195.321687ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 10:55:48.097248  615834 pod_ready.go:94] pod "kube-scheduler-ha-439113-m02" is "Ready"
	I1115 10:55:48.097282  615834 pod_ready.go:86] duration metric: took 399.672892ms for pod "kube-scheduler-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:48.097293  615834 pod_ready.go:83] waiting for pod "kube-scheduler-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:48.293745  615834 request.go:683] "Waited before sending request" delay="196.348718ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113-m03"
	I1115 10:55:48.493608  615834 request.go:683] "Waited before sending request" delay="196.332299ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 10:55:48.496763  615834 pod_ready.go:94] pod "kube-scheduler-ha-439113-m03" is "Ready"
	I1115 10:55:48.496836  615834 pod_ready.go:86] duration metric: took 399.525477ms for pod "kube-scheduler-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:55:48.496916  615834 pod_ready.go:40] duration metric: took 6.00458265s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:55:48.566701  615834 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:55:48.569888  615834 out.go:179] * Done! kubectl is now configured to use "ha-439113" cluster and "default" namespace by default
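
The wait loop above considers the control plane healthy once https://192.168.49.2:8443/healthz returns 200 with body "ok" (api_server.go), and then waits for the kube-system pods to report Ready. A minimal stand-alone Go sketch of such a probe, using only the standard library, follows; the endpoint comes from the log, while the InsecureSkipVerify setting is purely illustrative, since the real client authenticates with the profile certificates listed in the rest.Config above.

// healthz_probe.go: illustrative sketch of an apiserver /healthz probe.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only; the real check presents client certificates
			// instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log treats a 200 response with body "ok" as healthy.
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}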
	
	
	==> CRI-O <==
	Nov 15 10:53:32 ha-439113 crio[839]: time="2025-11-15T10:53:32.212255023Z" level=info msg="Created container ebc82b2592dea9050aa85b52fa9673230a41ffc541b1a9be7f57add5a41661ef: kube-system/storage-provisioner/storage-provisioner" id=c517296d-bd9b-4dd7-ad7d-ff27ad3f16a2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:53:32 ha-439113 crio[839]: time="2025-11-15T10:53:32.213468255Z" level=info msg="Starting container: ebc82b2592dea9050aa85b52fa9673230a41ffc541b1a9be7f57add5a41661ef" id=80cb212e-b4cc-44ae-8599-da6627d6502b name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:53:32 ha-439113 crio[839]: time="2025-11-15T10:53:32.215539196Z" level=info msg="Started container" PID=1833 containerID=ebc82b2592dea9050aa85b52fa9673230a41ffc541b1a9be7f57add5a41661ef description=kube-system/storage-provisioner/storage-provisioner id=80cb212e-b4cc-44ae-8599-da6627d6502b name=/runtime.v1.RuntimeService/StartContainer sandboxID=b490c9b037c7b899eacaef5b671bb76b4b6a5cd04156c3467f671dd334f6b230
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.631309762Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-vddcm/POD" id=114264a6-9671-4fa2-9ed7-ad5ab056ed9f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.631381419Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.64104073Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-vddcm Namespace:default ID:9a1924d1444fcffae5090157aa58d5f2c625d79ed31a0ff72e3fd6567559a4d4 UID:92adc10b-e910-45d1-8267-ee2e884d0dcc NetNS:/var/run/netns/e82fa3ee-f2c7-4bec-bc77-3640c59596cc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000138570}] Aliases:map[]}"
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.641091052Z" level=info msg="Adding pod default_busybox-7b57f96db7-vddcm to CNI network \"kindnet\" (type=ptp)"
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.661653537Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-vddcm Namespace:default ID:9a1924d1444fcffae5090157aa58d5f2c625d79ed31a0ff72e3fd6567559a4d4 UID:92adc10b-e910-45d1-8267-ee2e884d0dcc NetNS:/var/run/netns/e82fa3ee-f2c7-4bec-bc77-3640c59596cc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000138570}] Aliases:map[]}"
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.662092549Z" level=info msg="Checking pod default_busybox-7b57f96db7-vddcm for CNI network kindnet (type=ptp)"
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.667262676Z" level=info msg="Ran pod sandbox 9a1924d1444fcffae5090157aa58d5f2c625d79ed31a0ff72e3fd6567559a4d4 with infra container: default/busybox-7b57f96db7-vddcm/POD" id=114264a6-9671-4fa2-9ed7-ad5ab056ed9f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.668959811Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=8ecc1fc7-1996-45cc-9d8e-ac6a7fd74c74 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.669233808Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=8ecc1fc7-1996-45cc-9d8e-ac6a7fd74c74 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.669282695Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28 found" id=8ecc1fc7-1996-45cc-9d8e-ac6a7fd74c74 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.67127745Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=e4852942-9d64-4f44-8ef8-40df183d7f24 name=/runtime.v1.ImageService/PullImage
	Nov 15 10:55:51 ha-439113 crio[839]: time="2025-11-15T10:55:51.676540748Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.775299322Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=e4852942-9d64-4f44-8ef8-40df183d7f24 name=/runtime.v1.ImageService/PullImage
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.777574318Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=c4332747-14d1-4cdb-ae76-b4cb071a9e81 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.779308927Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=87342736-f913-41d9-a9a6-1048cf8ee9e0 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.784697674Z" level=info msg="Creating container: default/busybox-7b57f96db7-vddcm/busybox" id=4c7f054c-47ae-4bfe-8ca3-cfc91b62c944 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.785046347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.79768124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.801962496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.821311715Z" level=info msg="Created container 3f6eb171bd0175882d73d20d75a54b3a72cb956bd407e8095a60998cd1a10870: default/busybox-7b57f96db7-vddcm/busybox" id=4c7f054c-47ae-4bfe-8ca3-cfc91b62c944 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.822504287Z" level=info msg="Starting container: 3f6eb171bd0175882d73d20d75a54b3a72cb956bd407e8095a60998cd1a10870" id=d2311b24-aa7d-4466-af8b-25c404bf84c7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:55:53 ha-439113 crio[839]: time="2025-11-15T10:55:53.825172125Z" level=info msg="Started container" PID=2006 containerID=3f6eb171bd0175882d73d20d75a54b3a72cb956bd407e8095a60998cd1a10870 description=default/busybox-7b57f96db7-vddcm/busybox id=d2311b24-aa7d-4466-af8b-25c404bf84c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a1924d1444fcffae5090157aa58d5f2c625d79ed31a0ff72e3fd6567559a4d4
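
The CRI-O entries above trace the usual image flow for the busybox test pod: ImageStatus finds nothing, PullImage fetches gcr.io/k8s-minikube/busybox:1.28, then CreateContainer and StartContainer run it. The same steps can be replayed on the node with crictl (for example via minikube ssh); the sketch below simply shells out to crictl from Go and assumes crictl and sudo are present and configured for the CRI-O socket.

// replay_pull.go: rough sketch of replaying the pull shown in the CRI-O log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	image := "gcr.io/k8s-minikube/busybox:1.28" // image named in the log above

	// Roughly the ImageService PullImage step from the CRI-O log.
	if out, err := exec.Command("sudo", "crictl", "pull", image).CombinedOutput(); err != nil {
		fmt.Printf("pull failed: %v\n%s", err, out)
		return
	}

	// "crictl ps -a" prints the same columns as the "container status"
	// section further down in this report.
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("ps failed: %v\n", err)
	}
	fmt.Print(string(out))
}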
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	3f6eb171bd017       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   11 minutes ago      Running             busybox                   0                   9a1924d1444fc       busybox-7b57f96db7-vddcm            default
	e034410e44a50       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 minutes ago      Running             coredns                   0                   220741ce57653       coredns-66bc5c9577-mlm6m            kube-system
	ebc82b2592dea       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 minutes ago      Running             storage-provisioner       0                   b490c9b037c7b       storage-provisioner                 kube-system
	1bba46622cf08       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 minutes ago      Running             coredns                   0                   14afa271db53e       coredns-66bc5c9577-4g6sm            kube-system
	c5041c1c9a7b2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      14 minutes ago      Running             kindnet-cni               0                   929899784c659       kindnet-q4kpj                       kube-system
	32eb60c7f45d9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      14 minutes ago      Running             kube-proxy                0                   fb96f9b749aa4       kube-proxy-k7bcn                    kube-system
	f6362682174af       ghcr.io/kube-vip/kube-vip@sha256:a9c131fb1bd4690cd4563761c2f545eb89b92cc8ea19aec96c833d1b4b0211eb     14 minutes ago      Running             kube-vip                  0                   28ed11a5928a1       kube-vip-ha-439113                  kube-system
	3460218d601a4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      14 minutes ago      Running             kube-scheduler            0                   1917432d67012       kube-scheduler-ha-439113            kube-system
	12d1c250e31ea       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      14 minutes ago      Running             kube-controller-manager   0                   ecd91e2412183       kube-controller-manager-ha-439113   kube-system
	07ac2a5381c76       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      14 minutes ago      Running             kube-apiserver            0                   e329af05eba97       kube-apiserver-ha-439113            kube-system
	f4035d6f71e56       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      14 minutes ago      Running             etcd                      0                   f73860106416b       etcd-ha-439113                      kube-system
	
	
	==> coredns [1bba46622cf0862562b963eed4ad3b12dbcc4badddbf0a0b56dee4a1b3c9b955] <==
	[INFO] 10.244.2.2:35766 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000098167s
	[INFO] 10.244.1.3:60510 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154644s
	[INFO] 10.244.1.3:39741 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002283168s
	[INFO] 10.244.1.3:54024 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130906s
	[INFO] 10.244.1.3:55209 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105027s
	[INFO] 10.244.1.3:35197 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001586463s
	[INFO] 10.244.1.3:45473 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115276s
	[INFO] 10.244.1.3:34424 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106496s
	[INFO] 10.244.0.4:37123 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009299s
	[INFO] 10.244.0.4:49387 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001246841s
	[INFO] 10.244.0.4:38072 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001117117s
	[INFO] 10.244.2.2:33563 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000252852s
	[INFO] 10.244.2.2:40237 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125253s
	[INFO] 10.244.1.3:34350 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126508s
	[INFO] 10.244.1.3:39952 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131957s
	[INFO] 10.244.1.3:38822 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112092s
	[INFO] 10.244.0.4:45556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157327s
	[INFO] 10.244.0.4:57618 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145249s
	[INFO] 10.244.2.2:33582 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106922s
	[INFO] 10.244.2.2:37235 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165556s
	[INFO] 10.244.1.3:39333 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132473s
	[INFO] 10.244.1.3:52420 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000092711s
	[INFO] 10.244.0.4:51209 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000068202s
	[INFO] 10.244.0.4:54534 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000090282s
	[INFO] 10.244.2.2:51431 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069769s
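
The kubernetes.default / kubernetes.default.default.svc.cluster.local / kubernetes.default.svc.cluster.local triples in the queries above are a pod's resolver expanding the short name through its search path before CoreDNS answers the fully qualified form. A stdlib-only Go sketch of the same lookup is below; it only succeeds when run inside the cluster, where /etc/resolv.conf points at the cluster DNS service.

// lookup_service.go: in-cluster lookup of the apiserver service name.
package main

import (
	"fmt"
	"net"
)

func main() {
	// Inside a pod the resolver walks the search path seen in the CoreDNS log
	// until it hits the fully qualified service name.
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed (not running inside the cluster?):", err)
		return
	}
	fmt.Println("kubernetes.default resolves to:", addrs)
}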
	
	
	==> coredns [e034410e44a50c4b37d4c79d28f641bcd3feafc9353b925fffc80b38b5c23d67] <==
	[INFO] 10.244.2.2:58996 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000063582s
	[INFO] 10.244.1.3:43592 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156514s
	[INFO] 10.244.0.4:46132 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131341s
	[INFO] 10.244.0.4:43399 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103583s
	[INFO] 10.244.0.4:40629 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097371s
	[INFO] 10.244.0.4:46835 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093392s
	[INFO] 10.244.0.4:56743 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063279s
	[INFO] 10.244.2.2:53643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122127s
	[INFO] 10.244.2.2:33972 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001226434s
	[INFO] 10.244.2.2:35377 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000169536s
	[INFO] 10.244.2.2:47011 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138242s
	[INFO] 10.244.2.2:42897 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001142306s
	[INFO] 10.244.2.2:33366 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159305s
	[INFO] 10.244.1.3:56891 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000120305s
	[INFO] 10.244.0.4:47049 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221s
	[INFO] 10.244.0.4:47618 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073724s
	[INFO] 10.244.2.2:35579 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000215461s
	[INFO] 10.244.2.2:44191 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00017619s
	[INFO] 10.244.1.3:45635 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185799s
	[INFO] 10.244.1.3:37107 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145857s
	[INFO] 10.244.0.4:48143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147284s
	[INFO] 10.244.0.4:55785 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084629s
	[INFO] 10.244.2.2:41258 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111205s
	[INFO] 10.244.2.2:40201 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120871s
	[INFO] 10.244.2.2:44090 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084333s
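
Each CoreDNS line above follows the log plugin's fixed layout: client address, query id, query type and name, protocol and size flags, response code, response flags, response size and duration. The small Go sketch below pulls those fields out of one captured line with a regular expression; the pattern mirrors the observed format and is illustrative only.

// parse_coredns.go: split one CoreDNS log line (as captured above) into fields.
package main

import (
	"fmt"
	"regexp"
)

var lineRE = regexp.MustCompile(
	`^\[INFO\] ([\d.]+:\d+) - (\d+) "(\w+) IN ([^ ]+) (\w+) .*" (\w+) [^ ]+ \d+ ([\d.]+s)$`)

func main() {
	line := `[INFO] 10.244.1.3:34424 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106496s`
	m := lineRE.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("line did not match")
		return
	}
	// m[5] holds the protocol (udp/tcp); the remaining groups are printed.
	fmt.Printf("client=%s id=%s type=%s name=%s rcode=%s duration=%s\n",
		m[1], m[2], m[3], m[4], m[6], m[7])
}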
	
	
	==> describe nodes <==
	Name:               ha-439113
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_52_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:52:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:06:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:04:27 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:04:27 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:04:27 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:04:27 +0000   Sat, 15 Nov 2025 10:53:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-439113
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                6518a9f9-bb2d-42ae-b78a-3db01b5306a4
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vddcm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-4g6sm             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-mlm6m             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-439113                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-q4kpj                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-439113             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-439113    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-k7bcn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-439113             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-439113                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 14m   kube-proxy       
	  Normal   Starting                 14m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m   kubelet          Node ha-439113 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m   kubelet          Node ha-439113 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m   kubelet          Node ha-439113 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m   node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           13m   node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   NodeReady                13m   kubelet          Node ha-439113 status is now: NodeReady
	  Normal   RegisteredNode           11m   node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	
	
	Name:               ha-439113-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T10_53_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:53:25 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:57:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 15 Nov 2025 10:56:19 +0000   Sat, 15 Nov 2025 10:58:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 15 Nov 2025 10:56:19 +0000   Sat, 15 Nov 2025 10:58:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 15 Nov 2025 10:56:19 +0000   Sat, 15 Nov 2025 10:58:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 15 Nov 2025 10:56:19 +0000   Sat, 15 Nov 2025 10:58:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-439113-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d3455c64-e9a7-4ebe-b716-3cc9dc8ab51a
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-5xw75                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-439113-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-mcj42                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-439113-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-439113-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-kgftx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-439113-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-439113-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        13m    kube-proxy       
	  Normal  RegisteredNode  13m    node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal  NodeNotReady    8m13s  node-controller  Node ha-439113-m02 status is now: NodeNotReady
	
	
	Name:               ha-439113-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T10_54_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:54:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:06:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:06:51 +0000   Sat, 15 Nov 2025 10:54:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:06:51 +0000   Sat, 15 Nov 2025 10:54:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:06:51 +0000   Sat, 15 Nov 2025 10:54:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:06:51 +0000   Sat, 15 Nov 2025 10:55:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-439113-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                a83b5435-8c2a-4b27-b1ef-b4733d66b86e
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vk6xz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-439113-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-kxl4t                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-439113-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-439113-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-njlxj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-439113-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-439113-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        11m   kube-proxy       
	  Normal  RegisteredNode  11m   node-controller  Node ha-439113-m03 event: Registered Node ha-439113-m03 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node ha-439113-m03 event: Registered Node ha-439113-m03 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node ha-439113-m03 event: Registered Node ha-439113-m03 in Controller
	
	
	Name:               ha-439113-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T10_56_52_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:56:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:06:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:06:54 +0000   Sat, 15 Nov 2025 10:56:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:06:54 +0000   Sat, 15 Nov 2025 10:56:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:06:54 +0000   Sat, 15 Nov 2025 10:56:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:06:54 +0000   Sat, 15 Nov 2025 10:57:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-439113-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                bf4456d3-e8dc-4a97-8e4f-cb829c9a4b90
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-trswm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 kindnet-4k2k2               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-2fgtm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           10m                node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-439113-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-439113-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-439113-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   RegisteredNode           9m58s              node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   NodeReady                9m20s              kubelet          Node ha-439113-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov15 09:26] systemd-journald[225]: Failed to send WATCHDOG=1 notification message: Connection refused
	[Nov15 09:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[  +0.057232] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov15 10:38] overlayfs: idmapped layers are currently not supported
	[Nov15 10:39] overlayfs: idmapped layers are currently not supported
	[Nov15 10:52] overlayfs: idmapped layers are currently not supported
	[Nov15 10:53] overlayfs: idmapped layers are currently not supported
	[Nov15 10:54] overlayfs: idmapped layers are currently not supported
	[Nov15 10:56] overlayfs: idmapped layers are currently not supported
	[Nov15 10:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f4035d6f71e56ba53b8d8060485a468d1faf9b1a3bdfedd8aa7da86be584ec11] <==
	{"level":"warn","ts":"2025-11-15T10:59:38.284419Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"10ee04674cfb0a09","rtt":"1.668739ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:40.514810Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:40.514867Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:43.285400Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"10ee04674cfb0a09","rtt":"1.668739ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:43.285412Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"10ee04674cfb0a09","rtt":"20.709005ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:44.516442Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:44.516498Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:48.285675Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"10ee04674cfb0a09","rtt":"20.709005ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:48.285682Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"10ee04674cfb0a09","rtt":"1.668739ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:48.517898Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:48.518036Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:52.519935Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:52.519998Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"10ee04674cfb0a09","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:53.288377Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"10ee04674cfb0a09","rtt":"1.668739ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T10:59:53.288448Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"10ee04674cfb0a09","rtt":"20.709005ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"info","ts":"2025-11-15T10:59:53.394242Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"10ee04674cfb0a09","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-15T10:59:53.394297Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"10ee04674cfb0a09"}
	{"level":"info","ts":"2025-11-15T10:59:53.394315Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"10ee04674cfb0a09"}
	{"level":"info","ts":"2025-11-15T10:59:53.449798Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"10ee04674cfb0a09","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-15T10:59:53.449959Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"10ee04674cfb0a09"}
	{"level":"info","ts":"2025-11-15T10:59:53.478088Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"10ee04674cfb0a09"}
	{"level":"info","ts":"2025-11-15T10:59:53.482447Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"10ee04674cfb0a09"}
	{"level":"info","ts":"2025-11-15T11:02:37.720701Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1629}
	{"level":"info","ts":"2025-11-15T11:02:37.758076Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1629,"took":"36.86288ms","hash":2376258938,"current-db-size-bytes":4849664,"current-db-size":"4.8 MB","current-db-size-in-use-bytes":3022848,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-11-15T11:02:37.758132Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2376258938,"revision":1629,"compact-revision":-1}
	
	
	==> kernel <==
	 11:06:54 up  2:49,  0 user,  load average: 0.96, 1.03, 1.33
	Linux ha-439113 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c5041c1c9a7b23abf75df1eb1474d03e4c704bf14133dc981ee08a378b3e3397] <==
	I1115 11:06:21.299758       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:06:31.303570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:06:31.303603       1 main.go:301] handling current node
	I1115 11:06:31.303621       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:06:31.303627       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:06:31.304021       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1115 11:06:31.304040       1 main.go:324] Node ha-439113-m03 has CIDR [10.244.2.0/24] 
	I1115 11:06:31.304273       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:06:31.304346       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:06:41.304231       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:06:41.304273       1 main.go:301] handling current node
	I1115 11:06:41.304288       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:06:41.304294       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:06:41.304467       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1115 11:06:41.304480       1 main.go:324] Node ha-439113-m03 has CIDR [10.244.2.0/24] 
	I1115 11:06:41.304546       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:06:41.304557       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:06:51.298462       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:06:51.298514       1 main.go:301] handling current node
	I1115 11:06:51.298539       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:06:51.298629       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:06:51.298808       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1115 11:06:51.298816       1 main.go:324] Node ha-439113-m03 has CIDR [10.244.2.0/24] 
	I1115 11:06:51.298917       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:06:51.298924       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63] <==
	I1115 10:52:42.358842       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:52:42.427732       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:52:42.578242       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:52:42.586911       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1115 10:52:42.588308       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:52:42.593909       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:52:42.739735       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:52:43.727370       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:52:43.749842       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:52:43.764070       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:52:47.895686       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:52:48.597560       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:52:48.602925       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:52:48.744987       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1115 10:56:32.235267       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37242: use of closed network connection
	E1115 10:56:32.465177       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37262: use of closed network connection
	E1115 10:56:32.933377       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37298: use of closed network connection
	E1115 10:56:33.367452       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37338: use of closed network connection
	E1115 10:56:33.585138       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37360: use of closed network connection
	E1115 10:56:33.989276       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37390: use of closed network connection
	E1115 10:56:34.244482       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37412: use of closed network connection
	E1115 10:56:34.492173       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37444: use of closed network connection
	E1115 10:56:35.101366       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:37488: use of closed network connection
	W1115 10:58:12.601678       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I1115 11:02:40.821095       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [12d1c250e31ea78318f046f42fa718353d22cf0f3dd2a251f9cbcdfbdbabd3a3] <==
	I1115 10:52:47.788439       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:52:47.788989       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:52:47.789070       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:52:47.791552       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:52:47.792313       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:52:47.792374       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:52:47.795130       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:52:47.792745       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:52:47.792733       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:52:47.801484       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:52:47.803070       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:53:25.873159       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-439113-m02\" does not exist"
	I1115 10:53:25.932933       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-439113-m02" podCIDRs=["10.244.1.0/24"]
	I1115 10:53:27.742756       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-439113-m02"
	I1115 10:53:32.743622       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1115 10:54:57.108169       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-p5cmb failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-p5cmb\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1115 10:54:57.525205       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-439113-m03\" does not exist"
	I1115 10:54:57.562807       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-439113-m03" podCIDRs=["10.244.2.0/24"]
	I1115 10:54:57.784268       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-439113-m03"
	I1115 10:56:52.148374       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-439113-m04\" does not exist"
	I1115 10:56:52.176237       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-439113-m04" podCIDRs=["10.244.3.0/24"]
	I1115 10:56:52.827320       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-439113-m04"
	I1115 10:57:34.083635       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-439113-m04"
	I1115 10:58:41.325391       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-439113-m04"
	I1115 11:03:41.408881       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-5xw75"
	
	
	==> kube-proxy [32eb60c7f45d998b805a27e4338741aca603eaf9a27e0a65e24b5cf620344940] <==
	I1115 10:52:51.089191       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:52:51.198904       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:52:51.304092       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:52:51.304125       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 10:52:51.304192       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:52:51.394214       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:52:51.394269       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:52:51.399273       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:52:51.399691       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:52:51.399707       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:52:51.401156       1 config.go:200] "Starting service config controller"
	I1115 10:52:51.401166       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:52:51.401182       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:52:51.401186       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:52:51.401208       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:52:51.401212       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:52:51.405646       1 config.go:309] "Starting node config controller"
	I1115 10:52:51.405679       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:52:51.405687       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:52:51.502235       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:52:51.502271       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:52:51.502321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3460218d601a408c63f0ca5447c707456f5f810e7087fe7d37e58f8fc647abde] <==
	E1115 10:54:58.090815       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 8710afa6-4666-4dcd-a332-94b9d399b6ea(kube-system/kindnet-8vpd2) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8vpd2"
	E1115 10:54:58.090838       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8vpd2\": pod kindnet-8vpd2 is already assigned to node \"ha-439113-m03\"" logger="UnhandledError" pod="kube-system/kindnet-8vpd2"
	I1115 10:54:58.091932       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8vpd2" node="ha-439113-m03"
	E1115 10:54:58.092605       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9qdlw\": pod kube-proxy-9qdlw is already assigned to node \"ha-439113-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9qdlw" node="ha-439113-m03"
	E1115 10:54:58.092714       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 6ae3b63e-9e94-4ba4-bf3d-2327ace904b9(kube-system/kube-proxy-9qdlw) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-9qdlw"
	E1115 10:54:58.092772       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9qdlw\": pod kube-proxy-9qdlw is already assigned to node \"ha-439113-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-9qdlw"
	I1115 10:54:58.094597       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9qdlw" node="ha-439113-m03"
	I1115 10:55:49.850353       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="fcf06a02-6f97-4f03-972d-b514907c4bad" pod="default/busybox-7b57f96db7-b2f5h" assumedNode="ha-439113-m02" currentNode="ha-439113-m03"
	E1115 10:55:49.899106       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-b2f5h\": pod busybox-7b57f96db7-b2f5h is already assigned to node \"ha-439113-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-b2f5h" node="ha-439113-m03"
	E1115 10:55:49.899164       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod fcf06a02-6f97-4f03-972d-b514907c4bad(default/busybox-7b57f96db7-b2f5h) was assumed on ha-439113-m03 but assigned to ha-439113-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-b2f5h"
	E1115 10:55:49.899186       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-b2f5h\": pod busybox-7b57f96db7-b2f5h is already assigned to node \"ha-439113-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-b2f5h"
	E1115 10:55:49.899130       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5xw75\": pod busybox-7b57f96db7-5xw75 is already assigned to node \"ha-439113-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5xw75" node="ha-439113-m02"
	E1115 10:55:49.899335       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5xw75\": pod busybox-7b57f96db7-5xw75 is already assigned to node \"ha-439113-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5xw75"
	I1115 10:55:49.900202       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-b2f5h" node="ha-439113-m02"
	I1115 10:55:49.900927       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5xw75" node="ha-439113-m02"
	E1115 10:55:49.971135       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-vk6xz\": pod busybox-7b57f96db7-vk6xz is already assigned to node \"ha-439113-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-vk6xz" node="ha-439113-m03"
	E1115 10:55:49.971376       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-vk6xz\": pod busybox-7b57f96db7-vk6xz is already assigned to node \"ha-439113-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-vk6xz"
	E1115 10:55:50.011710       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-pvdw4\": pod busybox-7b57f96db7-pvdw4 is already assigned to node \"ha-439113\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-pvdw4" node="ha-439113"
	E1115 10:55:50.013170       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod d6954577-fecf-4f6c-adb6-15227667c812(default/busybox-7b57f96db7-pvdw4) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-pvdw4"
	E1115 10:55:50.013287       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-pvdw4\": pod busybox-7b57f96db7-pvdw4 is already assigned to node \"ha-439113\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-pvdw4"
	I1115 10:55:50.014554       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-pvdw4" node="ha-439113"
	E1115 10:55:51.333063       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-vddcm\": pod busybox-7b57f96db7-vddcm is already assigned to node \"ha-439113\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-vddcm" node="ha-439113"
	E1115 10:55:51.333129       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 92adc10b-e910-45d1-8267-ee2e884d0dcc(default/busybox-7b57f96db7-vddcm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-vddcm"
	E1115 10:55:51.333149       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-vddcm\": pod busybox-7b57f96db7-vddcm is already assigned to node \"ha-439113\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-vddcm"
	I1115 10:55:51.334731       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-vddcm" node="ha-439113"
	
	
	==> kubelet <==
	Nov 15 10:52:50 ha-439113 kubelet[1353]: E1115 10:52:50.097170    1353 projected.go:196] Error preparing data for projected volume kube-api-access-7whdk for pod kube-system/kindnet-q4kpj: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 10:52:50 ha-439113 kubelet[1353]: E1115 10:52:50.097261    1353 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5da9cefc-49b3-4bc2-8cb6-db44ed04b358-kube-api-access-7whdk podName:5da9cefc-49b3-4bc2-8cb6-db44ed04b358 nodeName:}" failed. No retries permitted until 2025-11-15 10:52:50.597235726 +0000 UTC m=+7.072615896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7whdk" (UniqueName: "kubernetes.io/projected/5da9cefc-49b3-4bc2-8cb6-db44ed04b358-kube-api-access-7whdk") pod "kindnet-q4kpj" (UID: "5da9cefc-49b3-4bc2-8cb6-db44ed04b358") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 10:52:50 ha-439113 kubelet[1353]: I1115 10:52:50.615777    1353 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 10:52:51 ha-439113 kubelet[1353]: I1115 10:52:51.867690    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k7bcn" podStartSLOduration=3.8676628490000002 podStartE2EDuration="3.867662849s" podCreationTimestamp="2025-11-15 10:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:52:51.821689898 +0000 UTC m=+8.297070076" watchObservedRunningTime="2025-11-15 10:52:51.867662849 +0000 UTC m=+8.343043027"
	Nov 15 10:52:53 ha-439113 kubelet[1353]: I1115 10:52:53.678682    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-q4kpj" podStartSLOduration=5.678665184 podStartE2EDuration="5.678665184s" podCreationTimestamp="2025-11-15 10:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:52:51.875525758 +0000 UTC m=+8.350905936" watchObservedRunningTime="2025-11-15 10:52:53.678665184 +0000 UTC m=+10.154045354"
	Nov 15 10:53:31 ha-439113 kubelet[1353]: I1115 10:53:31.614360    1353 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 10:53:31 ha-439113 kubelet[1353]: I1115 10:53:31.730975    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9460f377-28d8-418c-9dab-9428dfbfca1d-config-volume\") pod \"coredns-66bc5c9577-4g6sm\" (UID: \"9460f377-28d8-418c-9dab-9428dfbfca1d\") " pod="kube-system/coredns-66bc5c9577-4g6sm"
	Nov 15 10:53:31 ha-439113 kubelet[1353]: I1115 10:53:31.731041    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6xlh\" (UniqueName: \"kubernetes.io/projected/9460f377-28d8-418c-9dab-9428dfbfca1d-kube-api-access-b6xlh\") pod \"coredns-66bc5c9577-4g6sm\" (UID: \"9460f377-28d8-418c-9dab-9428dfbfca1d\") " pod="kube-system/coredns-66bc5c9577-4g6sm"
	Nov 15 10:53:31 ha-439113 kubelet[1353]: I1115 10:53:31.832138    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6a63ca66-7de2-40d8-96f0-a99da4ba3411-tmp\") pod \"storage-provisioner\" (UID: \"6a63ca66-7de2-40d8-96f0-a99da4ba3411\") " pod="kube-system/storage-provisioner"
	Nov 15 10:53:31 ha-439113 kubelet[1353]: I1115 10:53:31.832371    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd5j8\" (UniqueName: \"kubernetes.io/projected/6a63ca66-7de2-40d8-96f0-a99da4ba3411-kube-api-access-sd5j8\") pod \"storage-provisioner\" (UID: \"6a63ca66-7de2-40d8-96f0-a99da4ba3411\") " pod="kube-system/storage-provisioner"
	Nov 15 10:53:31 ha-439113 kubelet[1353]: I1115 10:53:31.832501    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whw9c\" (UniqueName: \"kubernetes.io/projected/d28d9bc0-5e46-4c01-8b62-aa0ef429d935-kube-api-access-whw9c\") pod \"coredns-66bc5c9577-mlm6m\" (UID: \"d28d9bc0-5e46-4c01-8b62-aa0ef429d935\") " pod="kube-system/coredns-66bc5c9577-mlm6m"
	Nov 15 10:53:31 ha-439113 kubelet[1353]: I1115 10:53:31.832592    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d28d9bc0-5e46-4c01-8b62-aa0ef429d935-config-volume\") pod \"coredns-66bc5c9577-mlm6m\" (UID: \"d28d9bc0-5e46-4c01-8b62-aa0ef429d935\") " pod="kube-system/coredns-66bc5c9577-mlm6m"
	Nov 15 10:53:32 ha-439113 kubelet[1353]: W1115 10:53:32.013379    1353 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-14afa271db53e61f2103fdadfb4f751dd350cbe116c6e2a8db9c7e7f10867d2f WatchSource:0}: Error finding container 14afa271db53e61f2103fdadfb4f751dd350cbe116c6e2a8db9c7e7f10867d2f: Status 404 returned error can't find the container with id 14afa271db53e61f2103fdadfb4f751dd350cbe116c6e2a8db9c7e7f10867d2f
	Nov 15 10:53:32 ha-439113 kubelet[1353]: W1115 10:53:32.080700    1353 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-220741ce57653bd04b151a021265cc7a8a5489293e3386014b55a8cac8ec57a2 WatchSource:0}: Error finding container 220741ce57653bd04b151a021265cc7a8a5489293e3386014b55a8cac8ec57a2: Status 404 returned error can't find the container with id 220741ce57653bd04b151a021265cc7a8a5489293e3386014b55a8cac8ec57a2
	Nov 15 10:53:32 ha-439113 kubelet[1353]: I1115 10:53:32.924093    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mlm6m" podStartSLOduration=44.924072498 podStartE2EDuration="44.924072498s" podCreationTimestamp="2025-11-15 10:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:53:32.920760903 +0000 UTC m=+49.396141098" watchObservedRunningTime="2025-11-15 10:53:32.924072498 +0000 UTC m=+49.399452668"
	Nov 15 10:53:32 ha-439113 kubelet[1353]: I1115 10:53:32.925249    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.925231822 podStartE2EDuration="43.925231822s" podCreationTimestamp="2025-11-15 10:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:53:32.902380975 +0000 UTC m=+49.377761202" watchObservedRunningTime="2025-11-15 10:53:32.925231822 +0000 UTC m=+49.400612009"
	Nov 15 10:53:33 ha-439113 kubelet[1353]: I1115 10:53:33.067135    1353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4g6sm" podStartSLOduration=45.067113471 podStartE2EDuration="45.067113471s" podCreationTimestamp="2025-11-15 10:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:53:32.95642295 +0000 UTC m=+49.431803169" watchObservedRunningTime="2025-11-15 10:53:33.067113471 +0000 UTC m=+49.542493640"
	Nov 15 10:55:50 ha-439113 kubelet[1353]: I1115 10:55:50.079046    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdm4s\" (UniqueName: \"kubernetes.io/projected/d6954577-fecf-4f6c-adb6-15227667c812-kube-api-access-vdm4s\") pod \"busybox-7b57f96db7-pvdw4\" (UID: \"d6954577-fecf-4f6c-adb6-15227667c812\") " pod="default/busybox-7b57f96db7-pvdw4"
	Nov 15 10:55:50 ha-439113 kubelet[1353]: E1115 10:55:50.231385    1353 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-vdm4s], unattached volumes=[], failed to process volumes=[]: context canceled" pod="default/busybox-7b57f96db7-pvdw4" podUID="d6954577-fecf-4f6c-adb6-15227667c812"
	Nov 15 10:55:50 ha-439113 kubelet[1353]: I1115 10:55:50.384749    1353 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdm4s\" (UniqueName: \"kubernetes.io/projected/d6954577-fecf-4f6c-adb6-15227667c812-kube-api-access-vdm4s\") pod \"d6954577-fecf-4f6c-adb6-15227667c812\" (UID: \"d6954577-fecf-4f6c-adb6-15227667c812\") "
	Nov 15 10:55:50 ha-439113 kubelet[1353]: I1115 10:55:50.389992    1353 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6954577-fecf-4f6c-adb6-15227667c812-kube-api-access-vdm4s" (OuterVolumeSpecName: "kube-api-access-vdm4s") pod "d6954577-fecf-4f6c-adb6-15227667c812" (UID: "d6954577-fecf-4f6c-adb6-15227667c812"). InnerVolumeSpecName "kube-api-access-vdm4s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 15 10:55:50 ha-439113 kubelet[1353]: I1115 10:55:50.485933    1353 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vdm4s\" (UniqueName: \"kubernetes.io/projected/d6954577-fecf-4f6c-adb6-15227667c812-kube-api-access-vdm4s\") on node \"ha-439113\" DevicePath \"\""
	Nov 15 10:55:51 ha-439113 kubelet[1353]: I1115 10:55:51.396048    1353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ghqb\" (UniqueName: \"kubernetes.io/projected/92adc10b-e910-45d1-8267-ee2e884d0dcc-kube-api-access-5ghqb\") pod \"busybox-7b57f96db7-vddcm\" (UID: \"92adc10b-e910-45d1-8267-ee2e884d0dcc\") " pod="default/busybox-7b57f96db7-vddcm"
	Nov 15 10:55:51 ha-439113 kubelet[1353]: I1115 10:55:51.651287    1353 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6954577-fecf-4f6c-adb6-15227667c812" path="/var/lib/kubelet/pods/d6954577-fecf-4f6c-adb6-15227667c812/volumes"
	Nov 15 10:55:51 ha-439113 kubelet[1353]: W1115 10:55:51.666962    1353 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-9a1924d1444fcffae5090157aa58d5f2c625d79ed31a0ff72e3fd6567559a4d4 WatchSource:0}: Error finding container 9a1924d1444fcffae5090157aa58d5f2c625d79ed31a0ff72e3fd6567559a4d4: Status 404 returned error can't find the container with id 9a1924d1444fcffae5090157aa58d5f2c625d79ed31a0ff72e3fd6567559a4d4
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-439113 -n ha-439113
helpers_test.go:269: (dbg) Run:  kubectl --context ha-439113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (366.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1115 11:11:22.372591  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:12:23.199559  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:12:45.439805  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:14:20.130053  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439113 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (6m3.903922522s)

                                                
                                                
-- stdout --
	* [ha-439113] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-439113" primary control-plane node in "ha-439113" cluster
	* Pulling base image v0.0.48-1761985721-21837 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-439113-m02" control-plane node in "ha-439113" cluster
	* Pulling base image v0.0.48-1761985721-21837 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-439113-m04" worker node in "ha-439113" cluster
	* Pulling base image v0.0.48-1761985721-21837 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:10:01.082148  644414 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:10:01.082358  644414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:10:01.082389  644414 out.go:374] Setting ErrFile to fd 2...
	I1115 11:10:01.082410  644414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:10:01.082810  644414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:10:01.083841  644414 out.go:368] Setting JSON to false
	I1115 11:10:01.084783  644414 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10352,"bootTime":1763194649,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:10:01.084926  644414 start.go:143] virtualization:  
	I1115 11:10:01.088178  644414 out.go:179] * [ha-439113] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:10:01.092058  644414 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:10:01.092190  644414 notify.go:221] Checking for updates...
	I1115 11:10:01.098137  644414 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:10:01.101114  644414 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:10:01.104087  644414 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:10:01.107082  644414 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:10:01.110104  644414 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:10:01.113527  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:01.114129  644414 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:10:01.149515  644414 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:10:01.149650  644414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:10:01.214815  644414 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-15 11:10:01.203630276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:10:01.214940  644414 docker.go:319] overlay module found
	I1115 11:10:01.218203  644414 out.go:179] * Using the docker driver based on existing profile
	I1115 11:10:01.222067  644414 start.go:309] selected driver: docker
	I1115 11:10:01.222095  644414 start.go:930] validating driver "docker" against &{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:10:01.222249  644414 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:10:01.222374  644414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:10:01.290199  644414 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-15 11:10:01.272152631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:10:01.290633  644414 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:10:01.290666  644414 cni.go:84] Creating CNI manager for ""
	I1115 11:10:01.290735  644414 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1115 11:10:01.290785  644414 start.go:353] cluster config:
	{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:10:01.295923  644414 out.go:179] * Starting "ha-439113" primary control-plane node in "ha-439113" cluster
	I1115 11:10:01.298854  644414 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:10:01.301829  644414 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:10:01.304672  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:10:01.304725  644414 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 11:10:01.304736  644414 cache.go:65] Caching tarball of preloaded images
	I1115 11:10:01.304766  644414 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:10:01.304826  644414 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:10:01.304837  644414 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:10:01.305022  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:01.325510  644414 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:10:01.325535  644414 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:10:01.325557  644414 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:10:01.325582  644414 start.go:360] acquireMachinesLock for ha-439113: {Name:mk8f5fddf42cbee62c5cd775824daee5e174c730 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:10:01.325648  644414 start.go:364] duration metric: took 38.851µs to acquireMachinesLock for "ha-439113"
	I1115 11:10:01.325671  644414 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:10:01.325676  644414 fix.go:54] fixHost starting: 
	I1115 11:10:01.325927  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:10:01.343552  644414 fix.go:112] recreateIfNeeded on ha-439113: state=Stopped err=<nil>
	W1115 11:10:01.343585  644414 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:10:01.346902  644414 out.go:252] * Restarting existing docker container for "ha-439113" ...
	I1115 11:10:01.347040  644414 cli_runner.go:164] Run: docker start ha-439113
	I1115 11:10:01.611121  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:10:01.630743  644414 kic.go:430] container "ha-439113" state is running.
	I1115 11:10:01.631322  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:10:01.657614  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:01.657847  644414 machine.go:94] provisionDockerMachine start ...
	I1115 11:10:01.657906  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:01.682277  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:01.682596  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:01.682604  644414 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:10:01.683536  644414 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:10:04.832447  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113
	
	I1115 11:10:04.832472  644414 ubuntu.go:182] provisioning hostname "ha-439113"
	I1115 11:10:04.832543  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:04.850661  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:04.850981  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:04.850997  644414 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113 && echo "ha-439113" | sudo tee /etc/hostname
	I1115 11:10:05.019162  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113
	
	I1115 11:10:05.019373  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:05.040944  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:05.041275  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:05.041312  644414 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:10:05.193601  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
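Note: the provisioning step above sets the container hostname and then patches /etc/hosts so that 127.0.1.1 maps to the profile name. A minimal manual check, assuming the same profile name as this run:

    # confirm hostname and the 127.0.1.1 mapping inside the node (sketch)
    minikube -p ha-439113 ssh -- "hostname && grep 127.0.1.1 /etc/hosts"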
	I1115 11:10:05.193631  644414 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:10:05.193651  644414 ubuntu.go:190] setting up certificates
	I1115 11:10:05.193661  644414 provision.go:84] configureAuth start
	I1115 11:10:05.193734  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:10:05.211992  644414 provision.go:143] copyHostCerts
	I1115 11:10:05.212041  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:05.212076  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:10:05.212095  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:05.212172  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:10:05.212264  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:05.212287  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:10:05.212292  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:05.212324  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:10:05.212370  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:05.212391  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:10:05.212398  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:05.212423  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:10:05.212513  644414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113 san=[127.0.0.1 192.168.49.2 ha-439113 localhost minikube]
	I1115 11:10:06.070863  644414 provision.go:177] copyRemoteCerts
	I1115 11:10:06.070938  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:10:06.071014  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.090345  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.196902  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 11:10:06.196968  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:10:06.216309  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 11:10:06.216383  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1115 11:10:06.234832  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 11:10:06.234898  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:10:06.252396  644414 provision.go:87] duration metric: took 1.058711326s to configureAuth
	I1115 11:10:06.252465  644414 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:10:06.252742  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:06.252850  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.270036  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:06.270362  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:06.270383  644414 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:10:06.614480  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:10:06.614501  644414 machine.go:97] duration metric: took 4.956644455s to provisionDockerMachine
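Note: provisioning finishes by writing CRIO_MINIKUBE_OPTIONS with --insecure-registry for the service CIDR (10.96.0.0/12) and restarting cri-o. A quick way to confirm the option and the runtime state, sketched with the same profile:

    minikube -p ha-439113 ssh -- "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"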
	I1115 11:10:06.614512  644414 start.go:293] postStartSetup for "ha-439113" (driver="docker")
	I1115 11:10:06.614523  644414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:10:06.614593  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:10:06.614633  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.635190  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.741143  644414 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:10:06.744492  644414 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:10:06.744522  644414 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:10:06.744534  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:10:06.744591  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:10:06.744682  644414 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:10:06.744693  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 11:10:06.744792  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:10:06.752206  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:10:06.769623  644414 start.go:296] duration metric: took 155.096546ms for postStartSetup
	I1115 11:10:06.769735  644414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:10:06.769797  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.786747  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.889967  644414 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:10:06.894381  644414 fix.go:56] duration metric: took 5.56869817s for fixHost
	I1115 11:10:06.894404  644414 start.go:83] releasing machines lock for "ha-439113", held for 5.568743749s
	I1115 11:10:06.894468  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:10:06.912478  644414 ssh_runner.go:195] Run: cat /version.json
	I1115 11:10:06.912503  644414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:10:06.912549  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.912557  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.935963  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.943189  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:07.140607  644414 ssh_runner.go:195] Run: systemctl --version
	I1115 11:10:07.147286  644414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:10:07.181632  644414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:10:07.186178  644414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:10:07.186315  644414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:10:07.194727  644414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:10:07.194754  644414 start.go:496] detecting cgroup driver to use...
	I1115 11:10:07.194787  644414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:10:07.194836  644414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:10:07.211038  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:10:07.228463  644414 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:10:07.228531  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:10:07.245230  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:10:07.259066  644414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:10:07.400677  644414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:10:07.528374  644414 docker.go:234] disabling docker service ...
	I1115 11:10:07.528452  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:10:07.544386  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:10:07.557994  644414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:10:07.673355  644414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:10:07.789554  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:10:07.802473  644414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:10:07.816520  644414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:10:07.816638  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.825590  644414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:10:07.825753  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.834624  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.843465  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.852151  644414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:10:07.860174  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.869179  644414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.877916  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.886986  644414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:10:07.894890  644414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:10:07.902588  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:10:08.022572  644414 ssh_runner.go:195] Run: sudo systemctl restart crio
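Note: the sed edits above point cri-o at the registry.k8s.io/pause:3.10.1 pause image, switch the cgroup manager to cgroupfs with conmon in the "pod" cgroup, and re-add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before the daemon is restarted. The resulting drop-in can be inspected with this sketch:

    minikube -p ha-439113 ssh -- \
      "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"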
	I1115 11:10:08.143861  644414 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:10:08.144001  644414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:10:08.148082  644414 start.go:564] Will wait 60s for crictl version
	I1115 11:10:08.148187  644414 ssh_runner.go:195] Run: which crictl
	I1115 11:10:08.151776  644414 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:10:08.176109  644414 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:10:08.176190  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:10:08.206377  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:10:08.246152  644414 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:10:08.249013  644414 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:10:08.265246  644414 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 11:10:08.269229  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:10:08.279381  644414 kubeadm.go:884] updating cluster {Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:10:08.279538  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:10:08.279594  644414 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:10:08.313662  644414 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:10:08.313686  644414 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:10:08.313742  644414 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:10:08.341156  644414 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:10:08.341180  644414 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:10:08.341189  644414 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 11:10:08.341297  644414 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
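Note: in the kubelet drop-in above, the empty "ExecStart=" line clears the ExecStart inherited from the base unit before the full command is re-declared; the CRI endpoint itself comes from /var/lib/kubelet/config.yaml (the KubeletConfiguration generated below). The effective unit can be reviewed with this sketch:

    minikube -p ha-439113 ssh -- "systemctl cat kubelet"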
	I1115 11:10:08.341383  644414 ssh_runner.go:195] Run: crio config
	I1115 11:10:08.417323  644414 cni.go:84] Creating CNI manager for ""
	I1115 11:10:08.417346  644414 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1115 11:10:08.417367  644414 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:10:08.417391  644414 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-439113 NodeName:ha-439113 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:10:08.417529  644414 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-439113"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
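Note: the kubeadm payload above is four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that are written to /var/tmp/minikube/kubeadm.yaml.new further down. When debugging a failed restart by hand, the file can be sanity-checked with kubeadm itself; the validate subcommand and binary path below are assumptions based on the binaries directory listed later:

    minikube -p ha-439113 ssh -- \
      "sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"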
	I1115 11:10:08.417554  644414 kube-vip.go:115] generating kube-vip config ...
	I1115 11:10:08.417612  644414 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 11:10:08.429604  644414 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
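Note: because "lsmod | grep ip_vs" returned nothing, kube-vip is configured below without IPVS-based control-plane load-balancing and relies on ARP announcement plus leader election for the VIP. On a kernel that ships the modules, they could be loaded and re-checked with:

    sudo modprobe ip_vs && lsmod | grep ip_vs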
	I1115 11:10:08.429765  644414 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
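Note: this manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs kube-vip as a static pod that claims the HA VIP 192.168.49.254 on eth0. Whether the address was actually bound can be checked with this sketch:

    minikube -p ha-439113 ssh -- "ip addr show eth0 | grep 192.168.49.254"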
	I1115 11:10:08.429836  644414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:10:08.437846  644414 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:10:08.437927  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1115 11:10:08.445900  644414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1115 11:10:08.459668  644414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:10:08.472428  644414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1115 11:10:08.485415  644414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 11:10:08.498516  644414 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 11:10:08.502240  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:10:08.512200  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:10:08.622281  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:10:08.654146  644414 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.2
	I1115 11:10:08.654177  644414 certs.go:195] generating shared ca certs ...
	I1115 11:10:08.654195  644414 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:08.654338  644414 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:10:08.654393  644414 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:10:08.654406  644414 certs.go:257] generating profile certs ...
	I1115 11:10:08.654496  644414 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 11:10:08.654531  644414 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423
	I1115 11:10:08.654549  644414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1115 11:10:09.275584  644414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423 ...
	I1115 11:10:09.275661  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423: {Name:mkcc7bf2bc49672369082197c2ea205c3b413e73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:09.275872  644414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423 ...
	I1115 11:10:09.275912  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423: {Name:mkddc44bc05ba35828280547efe210b00108cabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:09.276063  644414 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt
	I1115 11:10:09.276243  644414 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key
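Note: the regenerated apiserver certificate carries the service and loopback addresses, both control-plane node IPs and the HA VIP 192.168.49.254 as SANs, which is what lets clients reach the API server through the kube-vip address. The SAN list can be confirmed with:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt \
      | grep -A1 'Subject Alternative Name'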
	I1115 11:10:09.276437  644414 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
	I1115 11:10:09.276473  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 11:10:09.276509  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 11:10:09.276554  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 11:10:09.276590  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 11:10:09.276617  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 11:10:09.276659  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 11:10:09.276698  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 11:10:09.276726  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 11:10:09.276806  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:10:09.276885  644414 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:10:09.276915  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:10:09.276959  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:10:09.277013  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:10:09.277057  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:10:09.277153  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:10:09.277220  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.277264  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.277297  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.277887  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:10:09.296564  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:10:09.314781  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:10:09.335633  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:10:09.353146  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 11:10:09.370859  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 11:10:09.388232  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:10:09.410774  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:10:09.439944  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:10:09.477014  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:10:09.526226  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:10:09.559717  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:10:09.610930  644414 ssh_runner.go:195] Run: openssl version
	I1115 11:10:09.623460  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:10:09.643972  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.652807  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.653014  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.741237  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:10:09.749901  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:10:09.767184  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.774726  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.774846  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.838136  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:10:09.846476  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:10:09.890099  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.895038  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.895102  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.961757  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
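Note: each "openssl x509 -hash -noout" call above prints the subject hash that OpenSSL expects as the symlink name under /etc/ssl/certs (b5213941.0 for the minikube CA in this run). The mapping can be reproduced with:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"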
	I1115 11:10:09.976918  644414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:10:09.985687  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:10:10.033177  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:10:10.079291  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:10:10.125057  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:10:10.168941  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:10:10.219261  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
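Note: the "-checkend 86400" checks ask openssl whether each control-plane certificate expires within the next 86400 seconds (24 hours); a zero exit status means the certificate remains valid for at least another day. Standalone form:

    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for at least 24h" || echo "expires within 24h"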
	I1115 11:10:10.289307  644414 kubeadm.go:401] StartCluster: {Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:10:10.289486  644414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:10:10.289574  644414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:10:10.354477  644414 cri.go:89] found id: "ab0d0c34b46d585c39a39112a9d96382b3c2d54b036b01e5aabb4c9adb26fe48"
	I1115 11:10:10.354514  644414 cri.go:89] found id: "f5462600e253c742d103a09b518cadafb5354c9b674147e2394344fc4f6cdd17"
	I1115 11:10:10.354519  644414 cri.go:89] found id: "c9aa769ac1e410d0690ad31ea1ef812bb7de4c70e937d471392caf66737a2862"
	I1115 11:10:10.354523  644414 cri.go:89] found id: "49f53dedd4e32694c1de85010bf005f40b10dfe1e581005787ce4f5229936764"
	I1115 11:10:10.354526  644414 cri.go:89] found id: "e0b918dd4970fd4deab2473f719156caad36c70e91836ec9407fd62c0e66c2f1"
	I1115 11:10:10.354530  644414 cri.go:89] found id: ""
	I1115 11:10:10.354587  644414 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 11:10:10.370661  644414 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:10:10Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:10:10.370748  644414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:10:10.382258  644414 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:10:10.382296  644414 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:10:10.382347  644414 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:10:10.390626  644414 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:10:10.391102  644414 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-439113" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:10:10.391230  644414 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-584713/kubeconfig needs updating (will repair): [kubeconfig missing "ha-439113" cluster setting kubeconfig missing "ha-439113" context setting]
	I1115 11:10:10.391547  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:10.392161  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 11:10:10.393236  644414 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1115 11:10:10.393317  644414 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 11:10:10.393332  644414 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 11:10:10.393338  644414 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 11:10:10.393347  644414 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 11:10:10.393352  644414 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 11:10:10.394951  644414 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:10:10.405841  644414 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1115 11:10:10.405873  644414 kubeadm.go:602] duration metric: took 23.570972ms to restartPrimaryControlPlane
	I1115 11:10:10.405883  644414 kubeadm.go:403] duration metric: took 116.586705ms to StartCluster
	I1115 11:10:10.405898  644414 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:10.405969  644414 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:10:10.406686  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:10.406905  644414 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:10:10.406942  644414 start.go:242] waiting for startup goroutines ...
	I1115 11:10:10.406961  644414 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:10:10.407533  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:10.412935  644414 out.go:179] * Enabled addons: 
	I1115 11:10:10.415804  644414 addons.go:515] duration metric: took 8.829529ms for enable addons: enabled=[]
	I1115 11:10:10.415842  644414 start.go:247] waiting for cluster config update ...
	I1115 11:10:10.415858  644414 start.go:256] writing updated cluster config ...
	I1115 11:10:10.419060  644414 out.go:203] 
	I1115 11:10:10.422348  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:10.422466  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:10.425867  644414 out.go:179] * Starting "ha-439113-m02" control-plane node in "ha-439113" cluster
	I1115 11:10:10.428658  644414 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:10:10.431470  644414 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:10:10.434231  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:10:10.434251  644414 cache.go:65] Caching tarball of preloaded images
	I1115 11:10:10.434373  644414 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:10:10.434390  644414 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:10:10.434509  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:10.434718  644414 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:10:10.459579  644414 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:10:10.459605  644414 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:10:10.459619  644414 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:10:10.459645  644414 start.go:360] acquireMachinesLock for ha-439113-m02: {Name:mk3e9fb80c1177aa3d9d60f93ad9a2d436f1d794 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:10:10.459703  644414 start.go:364] duration metric: took 38.917µs to acquireMachinesLock for "ha-439113-m02"
	I1115 11:10:10.459726  644414 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:10:10.459732  644414 fix.go:54] fixHost starting: m02
	I1115 11:10:10.460001  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:10:10.490667  644414 fix.go:112] recreateIfNeeded on ha-439113-m02: state=Stopped err=<nil>
	W1115 11:10:10.490698  644414 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:10:10.494022  644414 out.go:252] * Restarting existing docker container for "ha-439113-m02" ...
	I1115 11:10:10.494103  644414 cli_runner.go:164] Run: docker start ha-439113-m02
	I1115 11:10:10.848234  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:10:10.876991  644414 kic.go:430] container "ha-439113-m02" state is running.
	I1115 11:10:10.877372  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:10:10.907598  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:10.907880  644414 machine.go:94] provisionDockerMachine start ...
	I1115 11:10:10.907948  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:10.946130  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:10.946438  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:10.946448  644414 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:10:10.947277  644414 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60346->127.0.0.1:33574: read: connection reset by peer
	I1115 11:10:14.161070  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02
	
	I1115 11:10:14.161137  644414 ubuntu.go:182] provisioning hostname "ha-439113-m02"
	I1115 11:10:14.161234  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:14.193112  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:14.193410  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:14.193421  644414 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113-m02 && echo "ha-439113-m02" | sudo tee /etc/hostname
	I1115 11:10:14.414884  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02
	
	I1115 11:10:14.415071  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:14.441593  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:14.441897  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:14.441920  644414 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:10:14.655329  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:10:14.655419  644414 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:10:14.655450  644414 ubuntu.go:190] setting up certificates
	I1115 11:10:14.655485  644414 provision.go:84] configureAuth start
	I1115 11:10:14.655584  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:10:14.684954  644414 provision.go:143] copyHostCerts
	I1115 11:10:14.684996  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:14.685029  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:10:14.685035  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:14.685109  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:10:14.685187  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:14.685203  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:10:14.685208  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:14.685233  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:10:14.685270  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:14.685286  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:10:14.685290  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:14.685314  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:10:14.685358  644414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113-m02 san=[127.0.0.1 192.168.49.3 ha-439113-m02 localhost minikube]
	I1115 11:10:15.164962  644414 provision.go:177] copyRemoteCerts
	I1115 11:10:15.165087  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:10:15.165161  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:15.183565  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:15.309845  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 11:10:15.309910  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:10:15.352565  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 11:10:15.352638  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 11:10:15.389073  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 11:10:15.389137  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:10:15.436657  644414 provision.go:87] duration metric: took 781.140009ms to configureAuth
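configureAuth above generated a server certificate with SANs [127.0.0.1 192.168.49.3 ha-439113-m02 localhost minikube] and copied it to /etc/docker/server.pem on the node. A minimal sketch for confirming those SANs on the machine itself, assuming openssl is available there:

  # sketch, not part of the test run; path taken from the copyRemoteCerts step above
  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'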
	I1115 11:10:15.436685  644414 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:10:15.436943  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:15.437049  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:15.467485  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:15.467817  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:15.467839  644414 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:10:16.972469  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:10:16.972493  644414 machine.go:97] duration metric: took 6.064595432s to provisionDockerMachine
	I1115 11:10:16.972505  644414 start.go:293] postStartSetup for "ha-439113-m02" (driver="docker")
	I1115 11:10:16.972515  644414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:10:16.972579  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:10:16.972636  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.011353  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.141531  644414 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:10:17.145724  644414 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:10:17.145750  644414 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:10:17.145761  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:10:17.145819  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:10:17.145893  644414 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:10:17.145901  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 11:10:17.146000  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:10:17.153864  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:10:17.175408  644414 start.go:296] duration metric: took 202.888277ms for postStartSetup
	I1115 11:10:17.175529  644414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:10:17.175603  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.202540  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.314494  644414 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:10:17.322089  644414 fix.go:56] duration metric: took 6.862349383s for fixHost
	I1115 11:10:17.322116  644414 start.go:83] releasing machines lock for "ha-439113-m02", held for 6.862399853s
	I1115 11:10:17.322193  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:10:17.346984  644414 out.go:179] * Found network options:
	I1115 11:10:17.349992  644414 out.go:179]   - NO_PROXY=192.168.49.2
	W1115 11:10:17.357013  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:10:17.357074  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 11:10:17.357145  644414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:10:17.357204  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.357473  644414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:10:17.357528  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.392713  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.393588  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.599074  644414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:10:17.766809  644414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:10:17.766905  644414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:10:17.789163  644414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:10:17.789191  644414 start.go:496] detecting cgroup driver to use...
	I1115 11:10:17.789231  644414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:10:17.789289  644414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:10:17.815110  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:10:17.838070  644414 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:10:17.838143  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:10:17.860257  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:10:17.879590  644414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:10:18.110145  644414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:10:18.361820  644414 docker.go:234] disabling docker service ...
	I1115 11:10:18.361900  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:10:18.384569  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:10:18.416731  644414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:10:18.641786  644414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:10:18.837399  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:10:18.857492  644414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:10:18.878074  644414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:10:18.878149  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.894400  644414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:10:18.894493  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.905139  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.919066  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.934192  644414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:10:18.947793  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.962215  644414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.975913  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.990422  644414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:10:19.001078  644414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:10:19.010948  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:10:19.243052  644414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:11:49.588377  644414 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.345288768s)
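The tee/sed commands above point crictl at the cri-o socket and rewrite /etc/crio/crio.conf.d/02-crio.conf before the crio restart (which took 1m30s here). A minimal sketch for double-checking that the edits took effect, assuming shell access to the node:

  # sketch, not part of the test run
  cat /etc/crictl.yaml   # expect: runtime-endpoint: unix:///var/run/crio/crio.sock
  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
  # expected if the seds matched: pause_image = "registry.k8s.io/pause:3.10.1",
  # cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and
  # "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls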
	I1115 11:11:49.588399  644414 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:11:49.588453  644414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:11:49.592631  644414 start.go:564] Will wait 60s for crictl version
	I1115 11:11:49.592694  644414 ssh_runner.go:195] Run: which crictl
	I1115 11:11:49.596673  644414 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:11:49.627565  644414 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:11:49.627655  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:11:49.657574  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:11:49.692786  644414 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:11:49.695732  644414 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 11:11:49.698667  644414 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:11:49.715635  644414 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 11:11:49.719827  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:11:49.729557  644414 mustload.go:66] Loading cluster: ha-439113
	I1115 11:11:49.729790  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:11:49.730057  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:11:49.747197  644414 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:11:49.747477  644414 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.3
	I1115 11:11:49.747492  644414 certs.go:195] generating shared ca certs ...
	I1115 11:11:49.747509  644414 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:11:49.747651  644414 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:11:49.747712  644414 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:11:49.747723  644414 certs.go:257] generating profile certs ...
	I1115 11:11:49.747793  644414 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 11:11:49.747854  644414 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.29032bc8
	I1115 11:11:49.747896  644414 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
	I1115 11:11:49.747908  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 11:11:49.747922  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 11:11:49.747939  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 11:11:49.747953  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 11:11:49.747968  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 11:11:49.747979  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 11:11:49.747995  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 11:11:49.748005  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 11:11:49.748058  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:11:49.748100  644414 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:11:49.748113  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:11:49.748139  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:11:49.748172  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:11:49.748196  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:11:49.748244  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:11:49.748274  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 11:11:49.748290  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 11:11:49.748302  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:49.748361  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:11:49.766640  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:11:49.865171  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 11:11:49.869248  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 11:11:49.877385  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 11:11:49.881661  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 11:11:49.890592  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 11:11:49.894372  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 11:11:49.902879  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 11:11:49.906594  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 11:11:49.914879  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 11:11:49.918911  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 11:11:49.928251  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 11:11:49.931713  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 11:11:49.939808  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:11:49.959417  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:11:49.979171  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:11:49.999374  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:11:50.034447  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 11:11:50.055956  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 11:11:50.075858  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:11:50.096569  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:11:50.123534  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:11:50.145099  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:11:50.165838  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:11:50.187631  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 11:11:50.201727  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 11:11:50.215561  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 11:11:50.228704  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 11:11:50.243716  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 11:11:50.256646  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 11:11:50.274083  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 11:11:50.289451  644414 ssh_runner.go:195] Run: openssl version
	I1115 11:11:50.296096  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:11:50.304816  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:11:50.308605  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:11:50.308696  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:11:50.349933  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:11:50.357859  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:11:50.366131  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:50.370090  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:50.370184  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:50.411529  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:11:50.419530  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:11:50.428122  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:11:50.431990  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:11:50.432078  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:11:50.473336  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:11:50.481905  644414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:11:50.485884  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:11:50.529145  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:11:50.575458  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:11:50.618147  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:11:50.660345  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:11:50.701441  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
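The six openssl runs above use `-checkend 86400`, which exits 0 only if the certificate will still be valid 86400 seconds (24 h) from now. A minimal sketch of the same check made explicit for one of the certs, using a path already shown in the log:

  # sketch, not part of the test run
  if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "cert valid for at least another 24h"
  else
      echo "cert expires within 24h (or is already expired)"
  fi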
	I1115 11:11:50.742918  644414 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1115 11:11:50.743050  644414 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
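A few steps below, this unit text is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service. A minimal sketch for seeing the unit plus drop-in actually in effect on the node, assuming systemd tooling is present there:

  # sketch, not part of the test run
  sudo systemctl cat kubelet                           # kubelet.service plus the 10-kubeadm.conf drop-in
  sudo systemctl show kubelet -p ExecStart --no-pager  # effective ExecStart after the override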
	I1115 11:11:50.743086  644414 kube-vip.go:115] generating kube-vip config ...
	I1115 11:11:50.743137  644414 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 11:11:50.756533  644414 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:11:50.756661  644414 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1115 11:11:50.756809  644414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:11:50.766452  644414 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:11:50.766519  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 11:11:50.774299  644414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 11:11:50.787555  644414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:11:50.801348  644414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
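With the static manifest copied to /etc/kubernetes/manifests/kube-vip.yaml, kubelet should start the kube-vip pod once it comes up. A minimal sketch for verifying this from the node (names taken from the manifest above; assumes crictl access as shown earlier in the log):

  # sketch, not part of the test run
  sudo crictl pods --name kube-vip   # the static pod sandbox, once kubelet has picked up the manifest
  sudo crictl ps --name kube-vip     # the kube-vip container itself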
	I1115 11:11:50.815426  644414 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 11:11:50.819361  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:11:50.829846  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:11:50.971817  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:11:50.986595  644414 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:11:50.987008  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:11:50.990541  644414 out.go:179] * Verifying Kubernetes components...
	I1115 11:11:50.993289  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:11:51.129111  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:11:51.143975  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 11:11:51.144052  644414 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 11:11:51.144377  644414 node_ready.go:35] waiting up to 6m0s for node "ha-439113-m02" to be "Ready" ...
	I1115 11:11:54.175109  644414 node_ready.go:49] node "ha-439113-m02" is "Ready"
	I1115 11:11:54.175142  644414 node_ready.go:38] duration metric: took 3.030741263s for node "ha-439113-m02" to be "Ready" ...
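The wait above polls the node object until its Ready condition turns True (about 3 s here). A minimal sketch of the equivalent one-off check with kubectl, assuming the ha-439113 context from the kubeconfig above is active:

  # sketch, not part of the test run; prints "True" once the node is Ready
  kubectl get node ha-439113-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'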
	I1115 11:11:54.175156  644414 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:11:54.175217  644414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:11:54.191139  644414 api_server.go:72] duration metric: took 3.204498804s to wait for apiserver process to appear ...
	I1115 11:11:54.191165  644414 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:11:54.191183  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:54.270987  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 11:11:54.271020  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 11:11:54.691298  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:54.702970  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:54.703005  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
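The 403 above comes from an unauthenticated probe (system:anonymous may not read /healthz), and the 500s that follow reflect the rbac/bootstrap-roles post-start hook still settling, so minikube simply keeps retrying. A minimal sketch of an authenticated, verbose probe using the client-certificate paths already shown in this log:

  # sketch, not part of the test run; assumes the ha-439113 kubeconfig context is active
  kubectl get --raw='/healthz?verbose'
  # or hit the endpoint directly with the profile's client certificate:
  curl --cacert /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt \
       --cert   /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt \
       --key    /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key \
       'https://192.168.49.2:8443/healthz?verbose'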
	I1115 11:11:55.191248  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:55.208784  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:55.208820  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:55.691283  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:55.701010  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:55.701040  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:56.191695  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:56.205744  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:56.205779  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:56.691307  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:56.703521  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 11:11:56.706435  644414 api_server.go:141] control plane version: v1.34.1
	I1115 11:11:56.706475  644414 api_server.go:131] duration metric: took 2.515302396s to wait for apiserver health ...
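
	The retries above are minikube's apiserver health wait: repeated GETs against https://192.168.49.2:8443/healthz return 500 while the rbac/bootstrap-roles post-start hook is still pending, then 200 once the control plane settles. The Go sketch below is illustrative rather than minikube's own code; it assumes the apiserver certificate is signed by the cluster's private CA (so TLS verification is skipped), and the retry cadence and timeout are arbitrary.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Skipped purely for illustration: the cluster CA is private.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 500 with "healthz check failed" while post-start hooks finish.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
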
	I1115 11:11:56.706484  644414 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:11:56.718211  644414 system_pods.go:59] 26 kube-system pods found
	I1115 11:11:56.718249  644414 system_pods.go:61] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.718259  644414 system_pods.go:61] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.718265  644414 system_pods.go:61] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 11:11:56.718282  644414 system_pods.go:61] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 11:11:56.718287  644414 system_pods.go:61] "etcd-ha-439113-m03" [5e59ce68-9c25-4639-ac5a-1f55855c2a60] Running
	I1115 11:11:56.718291  644414 system_pods.go:61] "kindnet-4k2k2" [5a741bbc-f2ab-4432-b229-309437f9455c] Running
	I1115 11:11:56.718295  644414 system_pods.go:61] "kindnet-kxl4t" [99aa3cce-8825-4785-a8c2-b42146240e09] Running
	I1115 11:11:56.718299  644414 system_pods.go:61] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 11:11:56.718305  644414 system_pods.go:61] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 11:11:56.718316  644414 system_pods.go:61] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 11:11:56.718322  644414 system_pods.go:61] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 11:11:56.718327  644414 system_pods.go:61] "kube-apiserver-ha-439113-m03" [46354a8c-2a61-4934-8b1a-57c563aa326b] Running
	I1115 11:11:56.718337  644414 system_pods.go:61] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:11:56.718352  644414 system_pods.go:61] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 11:11:56.718361  644414 system_pods.go:61] "kube-controller-manager-ha-439113-m03" [555d953c-b848-4daa-90c5-07b51c5c7722] Running
	I1115 11:11:56.718366  644414 system_pods.go:61] "kube-proxy-2fgtm" [7a3fd93a-54d8-4821-a49a-6839ed65fe69] Running
	I1115 11:11:56.718373  644414 system_pods.go:61] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 11:11:56.718384  644414 system_pods.go:61] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 11:11:56.718389  644414 system_pods.go:61] "kube-proxy-njlxj" [9150615b-96b9-416b-a5ca-79c380a8a9cb] Running
	I1115 11:11:56.718395  644414 system_pods.go:61] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:11:56.718405  644414 system_pods.go:61] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 11:11:56.718410  644414 system_pods.go:61] "kube-scheduler-ha-439113-m03" [e18cb155-9e7b-43e1-818b-bfff6a289f39] Running
	I1115 11:11:56.718414  644414 system_pods.go:61] "kube-vip-ha-439113" [8ed03cf0-14c3-4946-a73d-8cc5545156cb] Running
	I1115 11:11:56.718426  644414 system_pods.go:61] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 11:11:56.718432  644414 system_pods.go:61] "kube-vip-ha-439113-m03" [c0ddae32-acc6-4cda-8dde-084b2eea14a8] Running
	I1115 11:11:56.718438  644414 system_pods.go:61] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:11:56.718444  644414 system_pods.go:74] duration metric: took 11.954415ms to wait for pod list to return data ...
	I1115 11:11:56.718453  644414 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:11:56.724493  644414 default_sa.go:45] found service account: "default"
	I1115 11:11:56.724536  644414 default_sa.go:55] duration metric: took 6.072136ms for default service account to be created ...
	I1115 11:11:56.724547  644414 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:11:56.819602  644414 system_pods.go:86] 26 kube-system pods found
	I1115 11:11:56.819647  644414 system_pods.go:89] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.819658  644414 system_pods.go:89] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.819664  644414 system_pods.go:89] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 11:11:56.819670  644414 system_pods.go:89] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 11:11:56.819674  644414 system_pods.go:89] "etcd-ha-439113-m03" [5e59ce68-9c25-4639-ac5a-1f55855c2a60] Running
	I1115 11:11:56.819679  644414 system_pods.go:89] "kindnet-4k2k2" [5a741bbc-f2ab-4432-b229-309437f9455c] Running
	I1115 11:11:56.819694  644414 system_pods.go:89] "kindnet-kxl4t" [99aa3cce-8825-4785-a8c2-b42146240e09] Running
	I1115 11:11:56.819703  644414 system_pods.go:89] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 11:11:56.819711  644414 system_pods.go:89] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 11:11:56.819721  644414 system_pods.go:89] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 11:11:56.819726  644414 system_pods.go:89] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 11:11:56.819730  644414 system_pods.go:89] "kube-apiserver-ha-439113-m03" [46354a8c-2a61-4934-8b1a-57c563aa326b] Running
	I1115 11:11:56.819738  644414 system_pods.go:89] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:11:56.819747  644414 system_pods.go:89] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 11:11:56.819752  644414 system_pods.go:89] "kube-controller-manager-ha-439113-m03" [555d953c-b848-4daa-90c5-07b51c5c7722] Running
	I1115 11:11:56.819756  644414 system_pods.go:89] "kube-proxy-2fgtm" [7a3fd93a-54d8-4821-a49a-6839ed65fe69] Running
	I1115 11:11:56.819770  644414 system_pods.go:89] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 11:11:56.819778  644414 system_pods.go:89] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 11:11:56.819783  644414 system_pods.go:89] "kube-proxy-njlxj" [9150615b-96b9-416b-a5ca-79c380a8a9cb] Running
	I1115 11:11:56.819789  644414 system_pods.go:89] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:11:56.819797  644414 system_pods.go:89] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 11:11:56.819803  644414 system_pods.go:89] "kube-scheduler-ha-439113-m03" [e18cb155-9e7b-43e1-818b-bfff6a289f39] Running
	I1115 11:11:56.819811  644414 system_pods.go:89] "kube-vip-ha-439113" [8ed03cf0-14c3-4946-a73d-8cc5545156cb] Running
	I1115 11:11:56.819815  644414 system_pods.go:89] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 11:11:56.819819  644414 system_pods.go:89] "kube-vip-ha-439113-m03" [c0ddae32-acc6-4cda-8dde-084b2eea14a8] Running
	I1115 11:11:56.819824  644414 system_pods.go:89] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:11:56.819841  644414 system_pods.go:126] duration metric: took 95.282586ms to wait for k8s-apps to be running ...
	I1115 11:11:56.819854  644414 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:11:56.819918  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:11:56.837030  644414 system_svc.go:56] duration metric: took 17.155047ms WaitForService to wait for kubelet
	I1115 11:11:56.837061  644414 kubeadm.go:587] duration metric: took 5.85042521s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:11:56.837082  644414 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:11:56.841207  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:11:56.841239  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:11:56.841253  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:11:56.841257  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:11:56.841262  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:11:56.841265  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:11:56.841282  644414 node_conditions.go:105] duration metric: took 4.194343ms to run NodePressure ...
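
	The node_conditions lines above read each node's reported capacity (203034800Ki of ephemeral storage and 2 CPUs per node). A hedged client-go sketch of fetching the same fields follows; the kubeconfig path is a placeholder, not the path the test actually uses.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
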
	I1115 11:11:56.841300  644414 start.go:242] waiting for startup goroutines ...
	I1115 11:11:56.841324  644414 start.go:256] writing updated cluster config ...
	I1115 11:11:56.844944  644414 out.go:203] 
	I1115 11:11:56.848069  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:11:56.848191  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:11:56.851562  644414 out.go:179] * Starting "ha-439113-m04" worker node in "ha-439113" cluster
	I1115 11:11:56.855417  644414 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:11:56.858314  644414 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:11:56.861196  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:11:56.861243  644414 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:11:56.861453  644414 cache.go:65] Caching tarball of preloaded images
	I1115 11:11:56.861539  644414 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:11:56.861554  644414 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:11:56.861725  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:11:56.894239  644414 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:11:56.894262  644414 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:11:56.894277  644414 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:11:56.894301  644414 start.go:360] acquireMachinesLock for ha-439113-m04: {Name:mke6e857e5b25fb7a1d96f7fe08934c7b44258f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:11:56.894360  644414 start.go:364] duration metric: took 38.252µs to acquireMachinesLock for "ha-439113-m04"
	I1115 11:11:56.894384  644414 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:11:56.894391  644414 fix.go:54] fixHost starting: m04
	I1115 11:11:56.894639  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:11:56.934538  644414 fix.go:112] recreateIfNeeded on ha-439113-m04: state=Stopped err=<nil>
	W1115 11:11:56.934571  644414 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:11:56.937723  644414 out.go:252] * Restarting existing docker container for "ha-439113-m04" ...
	I1115 11:11:56.937813  644414 cli_runner.go:164] Run: docker start ha-439113-m04
	I1115 11:11:57.292353  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:11:57.320590  644414 kic.go:430] container "ha-439113-m04" state is running.
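
	fix.go brings the stopped worker back with `docker start` and then re-inspects the container until its state reads "running". A rough equivalent that shells out to the same two docker commands seen in the log, with an illustrative one-second retry cadence, might look like this:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// startAndWait restarts a stopped container and polls `docker container
// inspect --format {{.State.Status}}` until it reports "running".
func startAndWait(name string) error {
	if out, err := exec.Command("docker", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("docker start: %v: %s", err, out)
	}
	for i := 0; i < 30; i++ {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "running" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("container %q did not reach running state", name)
}

func main() {
	if err := startAndWait("ha-439113-m04"); err != nil {
		fmt.Println(err)
	}
}
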
	I1115 11:11:57.320978  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:11:57.343942  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:11:57.344181  644414 machine.go:94] provisionDockerMachine start ...
	I1115 11:11:57.344243  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:11:57.365933  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:11:57.366241  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:11:57.366255  644414 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:11:57.366995  644414 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:12:00.666212  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m04
	
	I1115 11:12:00.666285  644414 ubuntu.go:182] provisioning hostname "ha-439113-m04"
	I1115 11:12:00.666399  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:00.703141  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:12:00.703457  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:12:00.703468  644414 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113-m04 && echo "ha-439113-m04" | sudo tee /etc/hostname
	I1115 11:12:00.898855  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m04
	
	I1115 11:12:00.898950  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:00.948730  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:12:00.949093  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:12:00.949120  644414 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:12:01.162002  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
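
	The SSH command above is an idempotent /etc/hosts edit: if no line already maps the hostname, it rewrites an existing 127.0.1.1 entry or appends a new one. The same logic, expressed as a small Go sketch that edits a local file rather than running over SSH, could be:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell snippet: do nothing if the hostname is
// already mapped, otherwise rewrite the 127.0.1.1 line or append one.
func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	text := string(data)
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(text) {
		return nil // already mapped
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(text) {
		text = loopback.ReplaceAllString(text, "127.0.1.1 "+hostname)
	} else {
		if !strings.HasSuffix(text, "\n") {
			text += "\n"
		}
		text += "127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(text), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "ha-439113-m04"); err != nil {
		fmt.Println(err)
	}
}
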
	I1115 11:12:01.162071  644414 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:12:01.162106  644414 ubuntu.go:190] setting up certificates
	I1115 11:12:01.162147  644414 provision.go:84] configureAuth start
	I1115 11:12:01.162228  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:12:01.189297  644414 provision.go:143] copyHostCerts
	I1115 11:12:01.189345  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:12:01.189381  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:12:01.189387  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:12:01.189469  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:12:01.189552  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:12:01.189569  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:12:01.189574  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:12:01.189602  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:12:01.189643  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:12:01.189658  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:12:01.189662  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:12:01.189686  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:12:01.189732  644414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113-m04 san=[127.0.0.1 192.168.49.5 ha-439113-m04 localhost minikube]
	I1115 11:12:01.793644  644414 provision.go:177] copyRemoteCerts
	I1115 11:12:01.793724  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:12:01.793769  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:01.813786  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:01.932159  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 11:12:01.932221  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:12:01.959503  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 11:12:01.959565  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 11:12:01.985894  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 11:12:01.985956  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:12:02.016893  644414 provision.go:87] duration metric: took 854.716001ms to configureAuth
	I1115 11:12:02.016972  644414 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:12:02.017324  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:12:02.017494  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.042340  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:12:02.042641  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:12:02.042657  644414 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:12:02.421793  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:12:02.421855  644414 machine.go:97] duration metric: took 5.077657106s to provisionDockerMachine
	I1115 11:12:02.421891  644414 start.go:293] postStartSetup for "ha-439113-m04" (driver="docker")
	I1115 11:12:02.421937  644414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:12:02.422045  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:12:02.422113  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.441735  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.549972  644414 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:12:02.553292  644414 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:12:02.553326  644414 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:12:02.553339  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:12:02.553398  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:12:02.553481  644414 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:12:02.553492  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 11:12:02.553591  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:12:02.561640  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:12:02.581188  644414 start.go:296] duration metric: took 159.246745ms for postStartSetup
	I1115 11:12:02.581283  644414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:12:02.581334  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.598560  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.702117  644414 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:12:02.707693  644414 fix.go:56] duration metric: took 5.813294693s for fixHost
	I1115 11:12:02.707719  644414 start.go:83] releasing machines lock for "ha-439113-m04", held for 5.813345581s
	I1115 11:12:02.707815  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:12:02.727805  644414 out.go:179] * Found network options:
	I1115 11:12:02.730701  644414 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1115 11:12:02.733528  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:12:02.733564  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:12:02.733599  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:12:02.733615  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 11:12:02.733685  644414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:12:02.733735  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.734056  644414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:12:02.734115  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.762180  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.770444  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.906742  644414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:12:02.982777  644414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:12:02.982870  644414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:12:02.991311  644414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:12:02.991334  644414 start.go:496] detecting cgroup driver to use...
	I1115 11:12:02.991372  644414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:12:02.991426  644414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:12:03.010259  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:12:03.026209  644414 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:12:03.026295  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:12:03.042235  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:12:03.056541  644414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:12:03.207440  644414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:12:03.335536  644414 docker.go:234] disabling docker service ...
	I1115 11:12:03.335651  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:12:03.353883  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:12:03.369431  644414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:12:03.486211  644414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:12:03.610710  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:12:03.625360  644414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:12:03.641312  644414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:12:03.641378  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.651264  644414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:12:03.651338  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.665109  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.675589  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.686503  644414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:12:03.694865  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.705871  644414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.714726  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.723852  644414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:12:03.731853  644414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:12:03.740511  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:12:03.853255  644414 ssh_runner.go:195] Run: sudo systemctl restart crio
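
	The sed commands above point CRI-O at the expected pause image and cgroup driver in /etc/crio/crio.conf.d/02-crio.conf before systemd is reloaded and crio restarted. A rough Go equivalent of the first two substitutions, assuming the drop-in file already contains pause_image and cgroup_manager keys, is sketched below; the remaining sed calls in the log handle conmon_cgroup and the default_sysctls block the same way.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Println(err)
		return
	}
	text := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	text = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(text, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	text = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(text, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(text), 0644); err != nil {
		fmt.Println(err)
	}
	// The log then runs `systemctl daemon-reload` and `systemctl restart crio`.
}
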
	I1115 11:12:04.003040  644414 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:12:04.003163  644414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:12:04.007573  644414 start.go:564] Will wait 60s for crictl version
	I1115 11:12:04.007728  644414 ssh_runner.go:195] Run: which crictl
	I1115 11:12:04.014385  644414 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:12:04.042291  644414 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:12:04.042400  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:12:04.076162  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:12:04.110265  644414 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:12:04.113250  644414 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 11:12:04.116130  644414 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1115 11:12:04.118985  644414 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:12:04.135746  644414 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 11:12:04.140419  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:12:04.151141  644414 mustload.go:66] Loading cluster: ha-439113
	I1115 11:12:04.151383  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:12:04.151632  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:12:04.169829  644414 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:12:04.170121  644414 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.5
	I1115 11:12:04.170137  644414 certs.go:195] generating shared ca certs ...
	I1115 11:12:04.170152  644414 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:12:04.170287  644414 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:12:04.170332  644414 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:12:04.170347  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 11:12:04.170362  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 11:12:04.170377  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 11:12:04.170392  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 11:12:04.170455  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:12:04.170489  644414 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:12:04.170502  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:12:04.170528  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:12:04.170554  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:12:04.170579  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:12:04.170625  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:12:04.170653  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.170666  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.170682  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.170703  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:12:04.192999  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:12:04.214491  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:12:04.238386  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:12:04.261791  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:12:04.282186  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:12:04.301663  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:12:04.323494  644414 ssh_runner.go:195] Run: openssl version
	I1115 11:12:04.330506  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:12:04.339641  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.343359  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.343471  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.384944  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:12:04.393726  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:12:04.401885  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.405917  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.405984  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.448096  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:12:04.456341  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:12:04.464809  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.469548  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.469657  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.512809  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
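
	Each certificate copied under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and linked into /etc/ssl/certs as <hash>.0, the layout OpenSSL-based clients use to locate trusted roots. A sketch of that step, shelling out to the same openssl command and assuming root privileges for the symlink (hence the sudo in the log), could be:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert hashes a PEM certificate and symlinks it into /etc/ssl/certs
// under "<hash>.0" so OpenSSL can discover it.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link; requires root.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
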
	I1115 11:12:04.521564  644414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:12:04.525477  644414 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 11:12:04.525571  644414 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1115 11:12:04.525671  644414 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:12:04.525750  644414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:12:04.534631  644414 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:12:04.534732  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1115 11:12:04.542762  644414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 11:12:04.555474  644414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:12:04.568549  644414 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 11:12:04.572246  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:12:04.582645  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:12:04.720397  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:12:04.734431  644414 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1115 11:12:04.734793  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:12:04.737605  644414 out.go:179] * Verifying Kubernetes components...
	I1115 11:12:04.740524  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:12:04.870273  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:12:04.886167  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 11:12:04.886294  644414 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 11:12:04.886567  644414 node_ready.go:35] waiting up to 6m0s for node "ha-439113-m04" to be "Ready" ...
	I1115 11:12:04.890505  644414 node_ready.go:49] node "ha-439113-m04" is "Ready"
	I1115 11:12:04.890532  644414 node_ready.go:38] duration metric: took 3.920221ms for node "ha-439113-m04" to be "Ready" ...
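
	node_ready.go waits up to 6m0s for the worker's Ready condition; here it is already True on the first check, so the wait takes only a few milliseconds. An illustrative client-go version of that wait is below; the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is True.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-439113-m04", metav1.GetOptions{})
		if err == nil && nodeIsReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("node never became Ready")
}
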
	I1115 11:12:04.890569  644414 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:12:04.890627  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:12:04.906249  644414 system_svc.go:56] duration metric: took 15.693042ms WaitForService to wait for kubelet
	I1115 11:12:04.906349  644414 kubeadm.go:587] duration metric: took 171.724556ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:12:04.906397  644414 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:12:04.916259  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:12:04.916376  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:12:04.916421  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:12:04.916457  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:12:04.916477  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:12:04.916512  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:12:04.916538  644414 node_conditions.go:105] duration metric: took 10.120472ms to run NodePressure ...
	I1115 11:12:04.916592  644414 start.go:242] waiting for startup goroutines ...
	I1115 11:12:04.916629  644414 start.go:256] writing updated cluster config ...
	I1115 11:12:04.917071  644414 ssh_runner.go:195] Run: rm -f paused
	I1115 11:12:04.922331  644414 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:12:04.922989  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 11:12:04.955742  644414 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4g6sm" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:12:06.963336  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:08.980310  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:11.479328  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:13.964446  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:16.463626  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:18.465383  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:20.962686  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:22.964048  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:24.966447  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:27.463942  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:29.466713  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	I1115 11:12:30.462795  644414 pod_ready.go:94] pod "coredns-66bc5c9577-4g6sm" is "Ready"
	I1115 11:12:30.462820  644414 pod_ready.go:86] duration metric: took 25.506978071s for pod "coredns-66bc5c9577-4g6sm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.462830  644414 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mlm6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.469415  644414 pod_ready.go:94] pod "coredns-66bc5c9577-mlm6m" is "Ready"
	I1115 11:12:30.469441  644414 pod_ready.go:86] duration metric: took 6.60411ms for pod "coredns-66bc5c9577-mlm6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.473231  644414 pod_ready.go:83] waiting for pod "etcd-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.480070  644414 pod_ready.go:94] pod "etcd-ha-439113" is "Ready"
	I1115 11:12:30.480096  644414 pod_ready.go:86] duration metric: took 6.837381ms for pod "etcd-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.480106  644414 pod_ready.go:83] waiting for pod "etcd-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.486550  644414 pod_ready.go:94] pod "etcd-ha-439113-m02" is "Ready"
	I1115 11:12:30.486578  644414 pod_ready.go:86] duration metric: took 6.465838ms for pod "etcd-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.486589  644414 pod_ready.go:83] waiting for pod "etcd-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.657170  644414 request.go:683] "Waited before sending request" delay="167.271906ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 11:12:30.660251  644414 pod_ready.go:99] pod "etcd-ha-439113-m03" in "kube-system" namespace is gone: node "ha-439113-m03" hosting pod "etcd-ha-439113-m03" is not found/running (skipping!): nodes "ha-439113-m03" not found
	I1115 11:12:30.660271  644414 pod_ready.go:86] duration metric: took 173.674417ms for pod "etcd-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
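
	The "Waited before sending request ... client-side throttling, not priority and fairness" messages come from client-go's client-side rate limiter; the rest.Config dumped earlier has QPS:0 and Burst:0, which means the library's built-in defaults apply. A caller that wanted fewer of these delays could raise the limits before building the clientset, as in this hedged sketch (the values are arbitrary, and higher limits simply trade client-side waiting for more load on the apiserver):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	// QPS/Burst of 0 mean "use client-go's defaults", which is what produces
	// the throttling waits in the log. Raising them reduces client-side delay.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", cs != nil)
}
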
	I1115 11:12:30.856532  644414 request.go:683] "Waited before sending request" delay="196.157902ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1115 11:12:30.862230  644414 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.056631  644414 request.go:683] "Waited before sending request" delay="194.303781ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113"
	I1115 11:12:31.256567  644414 request.go:683] "Waited before sending request" delay="196.320457ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:31.260364  644414 pod_ready.go:94] pod "kube-apiserver-ha-439113" is "Ready"
	I1115 11:12:31.260440  644414 pod_ready.go:86] duration metric: took 398.184225ms for pod "kube-apiserver-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.260460  644414 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.456733  644414 request.go:683] "Waited before sending request" delay="196.195936ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113-m02"
	I1115 11:12:31.657283  644414 request.go:683] "Waited before sending request" delay="189.364553ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:31.669486  644414 pod_ready.go:94] pod "kube-apiserver-ha-439113-m02" is "Ready"
	I1115 11:12:31.669527  644414 pod_ready.go:86] duration metric: took 409.053455ms for pod "kube-apiserver-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.669545  644414 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.856759  644414 request.go:683] "Waited before sending request" delay="187.140315ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113-m03"
	I1115 11:12:32.057081  644414 request.go:683] "Waited before sending request" delay="194.340659ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 11:12:32.060246  644414 pod_ready.go:99] pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace is gone: node "ha-439113-m03" hosting pod "kube-apiserver-ha-439113-m03" is not found/running (skipping!): nodes "ha-439113-m03" not found
	I1115 11:12:32.060269  644414 pod_ready.go:86] duration metric: took 390.716754ms for pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:32.256765  644414 request.go:683] "Waited before sending request" delay="196.346784ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1115 11:12:32.260967  644414 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:32.457411  644414 request.go:683] "Waited before sending request" delay="196.343854ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-439113"
	I1115 11:12:32.656543  644414 request.go:683] "Waited before sending request" delay="195.259075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:32.857312  644414 request.go:683] "Waited before sending request" delay="95.237723ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-439113"
	I1115 11:12:33.056759  644414 request.go:683] "Waited before sending request" delay="193.348543ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:33.456512  644414 request.go:683] "Waited before sending request" delay="191.213474ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:33.857248  644414 request.go:683] "Waited before sending request" delay="92.163849ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	W1115 11:12:34.268915  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:36.769187  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:38.769594  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:40.775431  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:43.268655  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	I1115 11:12:45.275032  644414 pod_ready.go:94] pod "kube-controller-manager-ha-439113" is "Ready"
	I1115 11:12:45.275075  644414 pod_ready.go:86] duration metric: took 13.01407493s for pod "kube-controller-manager-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.275087  644414 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.305482  644414 pod_ready.go:94] pod "kube-controller-manager-ha-439113-m02" is "Ready"
	I1115 11:12:45.305509  644414 pod_ready.go:86] duration metric: took 30.414418ms for pod "kube-controller-manager-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.305520  644414 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.308592  644414 pod_ready.go:99] pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace is gone: getting pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace (will retry): pods "kube-controller-manager-ha-439113-m03" not found
	I1115 11:12:45.308616  644414 pod_ready.go:86] duration metric: took 3.088777ms for pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.312595  644414 pod_ready.go:83] waiting for pod "kube-proxy-2fgtm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.319584  644414 pod_ready.go:94] pod "kube-proxy-2fgtm" is "Ready"
	I1115 11:12:45.319658  644414 pod_ready.go:86] duration metric: took 6.96691ms for pod "kube-proxy-2fgtm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.319684  644414 pod_ready.go:83] waiting for pod "kube-proxy-k7bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.333364  644414 pod_ready.go:94] pod "kube-proxy-k7bcn" is "Ready"
	I1115 11:12:45.333446  644414 pod_ready.go:86] duration metric: took 13.743575ms for pod "kube-proxy-k7bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.333472  644414 pod_ready.go:83] waiting for pod "kube-proxy-kgftx" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.461841  644414 request.go:683] "Waited before sending request" delay="128.26876ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kgftx"
	I1115 11:12:45.662133  644414 request.go:683] "Waited before sending request" delay="196.336603ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:45.666231  644414 pod_ready.go:94] pod "kube-proxy-kgftx" is "Ready"
	I1115 11:12:45.666259  644414 pod_ready.go:86] duration metric: took 332.766862ms for pod "kube-proxy-kgftx" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.862402  644414 request.go:683] "Waited before sending request" delay="196.047882ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1115 11:12:45.868100  644414 pod_ready.go:83] waiting for pod "kube-scheduler-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:46.061503  644414 request.go:683] "Waited before sending request" delay="193.299208ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113"
	I1115 11:12:46.262349  644414 request.go:683] "Waited before sending request" delay="196.337092ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:46.266390  644414 pod_ready.go:94] pod "kube-scheduler-ha-439113" is "Ready"
	I1115 11:12:46.266415  644414 pod_ready.go:86] duration metric: took 398.289218ms for pod "kube-scheduler-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:46.266426  644414 pod_ready.go:83] waiting for pod "kube-scheduler-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:46.461857  644414 request.go:683] "Waited before sending request" delay="195.354736ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113-m02"
	I1115 11:12:46.662164  644414 request.go:683] "Waited before sending request" delay="196.315389ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:46.862451  644414 request.go:683] "Waited before sending request" delay="95.198714ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113-m02"
	I1115 11:12:47.062064  644414 request.go:683] "Waited before sending request" delay="194.32444ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:47.462004  644414 request.go:683] "Waited before sending request" delay="191.259764ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:47.862129  644414 request.go:683] "Waited before sending request" delay="91.206426ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	W1115 11:12:48.273067  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:50.273503  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:52.273873  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:54.774253  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:56.774741  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:59.273054  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:01.273531  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:03.274007  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:05.773995  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:08.274070  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:10.774950  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:13.273142  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:15.774523  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:18.275146  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:20.775066  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:23.273644  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:25.772983  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:27.773086  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:29.774439  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:32.274282  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:34.773274  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:36.774007  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:38.774499  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:41.272920  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:43.272980  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:45.290069  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:47.774370  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:49.775099  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:52.273471  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:54.774040  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:56.776828  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:58.777477  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:01.274086  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:03.774603  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:06.274270  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:08.776333  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:11.274406  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:13.775288  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:16.274470  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:18.774609  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:21.275329  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:23.773704  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:25.781356  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:28.273802  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:30.773867  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:33.273730  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:35.274388  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:37.774988  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:40.273650  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:42.274574  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:44.775136  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:47.273253  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:49.774129  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:52.274209  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:54.773957  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:56.774057  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:58.774103  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:00.794798  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:03.273466  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:05.274892  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:07.773906  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:09.775150  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:12.274372  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:14.773892  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:16.774210  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:19.273576  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:21.773796  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:24.273997  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:26.274175  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:28.775134  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:31.275044  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:33.773408  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:35.774067  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:37.774322  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:40.273391  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:42.275088  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:44.773835  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:46.773944  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:49.273345  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:51.274206  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:53.275406  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:55.276298  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:57.773509  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:59.773622  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:16:01.773991  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:16:04.273687  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	I1115 11:16:04.922792  644414 pod_ready.go:86] duration metric: took 3m18.656348919s for pod "kube-scheduler-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:16:04.922828  644414 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1115 11:16:04.922844  644414 pod_ready.go:40] duration metric: took 4m0.000432421s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:16:04.926118  644414 out.go:203] 
	W1115 11:16:04.928902  644414 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1115 11:16:04.931693  644414 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-arm64 -p ha-439113 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
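The stderr above is minikube's pod_ready loop: it polls each kube-system pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) until the pod reports Ready or is gone, and gives up after the 4m "extra" wait; here kube-scheduler-ha-439113-m02 never turned Ready, so the start exits with GUEST_START and the test sees exit status 80. Below is a minimal client-go sketch of the same readiness check, not minikube's actual pod_ready code; the namespace and label selector are taken from the log, while the file name and the default ~/.kube/config location are assumptions for illustration.

	// readiness_sketch.go (hypothetical file name, not minikube's pod_ready implementation):
	// list kube-system pods carrying the component=kube-scheduler label and print
	// whether each one has the PodReady condition set to True.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: a kubeconfig at the default ~/.kube/config location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "component=kube-scheduler"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s Ready=%v\n", p.Name, ready)
		}
	}

The equivalent CLI view is kubectl get pods -n kube-system -l component=kube-scheduler, which shows the same Ready condition the loop above was waiting on.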
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-439113
helpers_test.go:243: (dbg) docker inspect ha-439113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc",
	        "Created": "2025-11-15T10:52:17.169946413Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:10:01.380531105Z",
	            "FinishedAt": "2025-11-15T11:10:00.266325121Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/hosts",
	        "LogPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc-json.log",
	        "Name": "/ha-439113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-439113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-439113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc",
	                "LowerDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-439113",
	                "Source": "/var/lib/docker/volumes/ha-439113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-439113",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-439113",
	                "name.minikube.sigs.k8s.io": "ha-439113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1552653af76d6dd7c6162ea9f89df1884eadd013a674c8ab945e116cac5292c2",
	            "SandboxKey": "/var/run/docker/netns/1552653af76d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33569"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33570"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33573"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33571"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33572"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-439113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:f1:61:d7:6f:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70b4341e58399e11a79033573f4328a7d843f08aeced339b6115cf0c5d327007",
	                    "EndpointID": "ecb9ec3e068adfb90b6cea007bf9d7996cf48ef1255455853c88ec25ad196b03",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-439113",
	                        "d546a4fc19d8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
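The JSON above is the full docker inspect record; the restart path only needs a few fields from it, which the cli_runner lines later in this report read with Go templates (for example the host port mapped to 22/tcp, 33569 here, and the container IP 192.168.49.2). The sketch below performs the same lookup from Go by shelling out to docker, with the template string and container name copied from the log; the file name is hypothetical and this is illustrative only.

	// inspect_port_sketch.go (hypothetical): read the host port that Docker mapped
	// to the container's 22/tcp endpoint, using the same format template that
	// appears in the cli_runner lines of the "Last Start" log below.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"ha-439113").Output()
		if err != nil {
			log.Fatal(err)
		}
		// For the container state captured above this prints 33569.
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}

Reading individual fields through a format template this way avoids hand-parsing the full JSON document shown above.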
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-439113 -n ha-439113
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 logs -n 25: (1.449175375s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-439113 cp ha-439113-m03:/home/docker/cp-test.txt ha-439113-m04:/home/docker/cp-test_ha-439113-m03_ha-439113-m04.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test_ha-439113-m03_ha-439113-m04.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp testdata/cp-test.txt ha-439113-m04:/home/docker/cp-test.txt                                                             │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1077460994/001/cp-test_ha-439113-m04.txt │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113:/home/docker/cp-test_ha-439113-m04_ha-439113.txt                       │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113.txt                                                 │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113-m02:/home/docker/cp-test_ha-439113-m04_ha-439113-m02.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m02 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113-m02.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113-m03:/home/docker/cp-test_ha-439113-m04_ha-439113-m03.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113-m03.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ node    │ ha-439113 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:58 UTC │
	│ node    │ ha-439113 node start m02 --alsologtostderr -v 5                                                                                      │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:58 UTC │                     │
	│ node    │ ha-439113 node list --alsologtostderr -v 5                                                                                           │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:06 UTC │                     │
	│ stop    │ ha-439113 stop --alsologtostderr -v 5                                                                                                │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:06 UTC │ 15 Nov 25 11:07 UTC │
	│ start   │ ha-439113 start --wait true --alsologtostderr -v 5                                                                                   │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:07 UTC │ 15 Nov 25 11:09 UTC │
	│ node    │ ha-439113 node list --alsologtostderr -v 5                                                                                           │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:09 UTC │                     │
	│ node    │ ha-439113 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:09 UTC │ 15 Nov 25 11:09 UTC │
	│ stop    │ ha-439113 stop --alsologtostderr -v 5                                                                                                │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:09 UTC │ 15 Nov 25 11:10 UTC │
	│ start   │ ha-439113 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:10 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:10:01
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:10:01.082148  644414 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:10:01.082358  644414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:10:01.082389  644414 out.go:374] Setting ErrFile to fd 2...
	I1115 11:10:01.082410  644414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:10:01.082810  644414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:10:01.083841  644414 out.go:368] Setting JSON to false
	I1115 11:10:01.084783  644414 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10352,"bootTime":1763194649,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:10:01.084926  644414 start.go:143] virtualization:  
	I1115 11:10:01.088178  644414 out.go:179] * [ha-439113] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:10:01.092058  644414 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:10:01.092190  644414 notify.go:221] Checking for updates...
	I1115 11:10:01.098137  644414 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:10:01.101114  644414 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:10:01.104087  644414 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:10:01.107082  644414 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:10:01.110104  644414 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:10:01.113527  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:01.114129  644414 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:10:01.149515  644414 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:10:01.149650  644414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:10:01.214815  644414 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-15 11:10:01.203630276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:10:01.214940  644414 docker.go:319] overlay module found
	I1115 11:10:01.218203  644414 out.go:179] * Using the docker driver based on existing profile
	I1115 11:10:01.222067  644414 start.go:309] selected driver: docker
	I1115 11:10:01.222095  644414 start.go:930] validating driver "docker" against &{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:10:01.222249  644414 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:10:01.222374  644414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:10:01.290199  644414 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-15 11:10:01.272152631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:10:01.290633  644414 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:10:01.290666  644414 cni.go:84] Creating CNI manager for ""
	I1115 11:10:01.290735  644414 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1115 11:10:01.290785  644414 start.go:353] cluster config:
	{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:10:01.295923  644414 out.go:179] * Starting "ha-439113" primary control-plane node in "ha-439113" cluster
	I1115 11:10:01.298854  644414 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:10:01.301829  644414 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:10:01.304672  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:10:01.304725  644414 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 11:10:01.304736  644414 cache.go:65] Caching tarball of preloaded images
	I1115 11:10:01.304766  644414 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:10:01.304826  644414 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:10:01.304837  644414 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:10:01.305022  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:01.325510  644414 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:10:01.325535  644414 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:10:01.325557  644414 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:10:01.325582  644414 start.go:360] acquireMachinesLock for ha-439113: {Name:mk8f5fddf42cbee62c5cd775824daee5e174c730 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:10:01.325648  644414 start.go:364] duration metric: took 38.851µs to acquireMachinesLock for "ha-439113"
	I1115 11:10:01.325671  644414 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:10:01.325676  644414 fix.go:54] fixHost starting: 
	I1115 11:10:01.325927  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:10:01.343552  644414 fix.go:112] recreateIfNeeded on ha-439113: state=Stopped err=<nil>
	W1115 11:10:01.343585  644414 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:10:01.346902  644414 out.go:252] * Restarting existing docker container for "ha-439113" ...
	I1115 11:10:01.347040  644414 cli_runner.go:164] Run: docker start ha-439113
	I1115 11:10:01.611121  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:10:01.630743  644414 kic.go:430] container "ha-439113" state is running.
	I1115 11:10:01.631322  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:10:01.657614  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:01.657847  644414 machine.go:94] provisionDockerMachine start ...
	I1115 11:10:01.657906  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:01.682277  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:01.682596  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:01.682604  644414 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:10:01.683536  644414 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:10:04.832447  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113
	
	I1115 11:10:04.832472  644414 ubuntu.go:182] provisioning hostname "ha-439113"
	I1115 11:10:04.832543  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:04.850661  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:04.850981  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:04.850997  644414 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113 && echo "ha-439113" | sudo tee /etc/hostname
	I1115 11:10:05.019162  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113
	
	I1115 11:10:05.019373  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:05.040944  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:05.041275  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:05.041312  644414 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:10:05.193601  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:10:05.193631  644414 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:10:05.193651  644414 ubuntu.go:190] setting up certificates
	I1115 11:10:05.193661  644414 provision.go:84] configureAuth start
	I1115 11:10:05.193734  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:10:05.211992  644414 provision.go:143] copyHostCerts
	I1115 11:10:05.212041  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:05.212076  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:10:05.212095  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:05.212172  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:10:05.212264  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:05.212287  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:10:05.212292  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:05.212324  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:10:05.212370  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:05.212391  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:10:05.212398  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:05.212423  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:10:05.212513  644414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113 san=[127.0.0.1 192.168.49.2 ha-439113 localhost minikube]
	I1115 11:10:06.070863  644414 provision.go:177] copyRemoteCerts
	I1115 11:10:06.070938  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:10:06.071014  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.090345  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.196902  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 11:10:06.196968  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:10:06.216309  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 11:10:06.216383  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1115 11:10:06.234832  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 11:10:06.234898  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:10:06.252396  644414 provision.go:87] duration metric: took 1.058711326s to configureAuth
	I1115 11:10:06.252465  644414 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:10:06.252742  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:06.252850  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.270036  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:06.270362  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:06.270383  644414 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:10:06.614480  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:10:06.614501  644414 machine.go:97] duration metric: took 4.956644455s to provisionDockerMachine
	I1115 11:10:06.614512  644414 start.go:293] postStartSetup for "ha-439113" (driver="docker")
	I1115 11:10:06.614523  644414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:10:06.614593  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:10:06.614633  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.635190  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.741143  644414 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:10:06.744492  644414 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:10:06.744522  644414 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:10:06.744534  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:10:06.744591  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:10:06.744682  644414 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:10:06.744693  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 11:10:06.744792  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:10:06.752206  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:10:06.769623  644414 start.go:296] duration metric: took 155.096546ms for postStartSetup
	I1115 11:10:06.769735  644414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:10:06.769797  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.786747  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.889967  644414 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:10:06.894381  644414 fix.go:56] duration metric: took 5.56869817s for fixHost
	I1115 11:10:06.894404  644414 start.go:83] releasing machines lock for "ha-439113", held for 5.568743749s
	I1115 11:10:06.894468  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:10:06.912478  644414 ssh_runner.go:195] Run: cat /version.json
	I1115 11:10:06.912503  644414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:10:06.912549  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.912557  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.935963  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.943189  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:07.140607  644414 ssh_runner.go:195] Run: systemctl --version
	I1115 11:10:07.147286  644414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:10:07.181632  644414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:10:07.186178  644414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:10:07.186315  644414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:10:07.194727  644414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:10:07.194754  644414 start.go:496] detecting cgroup driver to use...
	I1115 11:10:07.194787  644414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:10:07.194836  644414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:10:07.211038  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:10:07.228463  644414 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:10:07.228531  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:10:07.245230  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:10:07.259066  644414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:10:07.400677  644414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:10:07.528374  644414 docker.go:234] disabling docker service ...
	I1115 11:10:07.528452  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:10:07.544386  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:10:07.557994  644414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:10:07.673355  644414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:10:07.789554  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:10:07.802473  644414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:10:07.816520  644414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:10:07.816638  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.825590  644414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:10:07.825753  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.834624  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.843465  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.852151  644414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:10:07.860174  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.869179  644414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.877916  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.886986  644414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:10:07.894890  644414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:10:07.902588  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:10:08.022572  644414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:10:08.143861  644414 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:10:08.144001  644414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:10:08.148082  644414 start.go:564] Will wait 60s for crictl version
	I1115 11:10:08.148187  644414 ssh_runner.go:195] Run: which crictl
	I1115 11:10:08.151776  644414 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:10:08.176109  644414 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:10:08.176190  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:10:08.206377  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:10:08.246152  644414 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:10:08.249013  644414 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:10:08.265246  644414 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 11:10:08.269229  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:10:08.279381  644414 kubeadm.go:884] updating cluster {Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:10:08.279538  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:10:08.279594  644414 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:10:08.313662  644414 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:10:08.313686  644414 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:10:08.313742  644414 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:10:08.341156  644414 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:10:08.341180  644414 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:10:08.341189  644414 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 11:10:08.341297  644414 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:10:08.341383  644414 ssh_runner.go:195] Run: crio config
	I1115 11:10:08.417323  644414 cni.go:84] Creating CNI manager for ""
	I1115 11:10:08.417346  644414 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1115 11:10:08.417367  644414 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:10:08.417391  644414 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-439113 NodeName:ha-439113 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:10:08.417529  644414 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-439113"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 11:10:08.417554  644414 kube-vip.go:115] generating kube-vip config ...
	I1115 11:10:08.417612  644414 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 11:10:08.429604  644414 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:10:08.429765  644414 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1115 11:10:08.429836  644414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:10:08.437846  644414 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:10:08.437927  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1115 11:10:08.445900  644414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1115 11:10:08.459668  644414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:10:08.472428  644414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1115 11:10:08.485415  644414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 11:10:08.498516  644414 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 11:10:08.502240  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:10:08.512200  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:10:08.622281  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:10:08.654146  644414 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.2
	I1115 11:10:08.654177  644414 certs.go:195] generating shared ca certs ...
	I1115 11:10:08.654195  644414 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:08.654338  644414 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:10:08.654393  644414 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:10:08.654406  644414 certs.go:257] generating profile certs ...
	I1115 11:10:08.654496  644414 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 11:10:08.654531  644414 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423
	I1115 11:10:08.654549  644414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1115 11:10:09.275584  644414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423 ...
	I1115 11:10:09.275661  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423: {Name:mkcc7bf2bc49672369082197c2ea205c3b413e73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:09.275872  644414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423 ...
	I1115 11:10:09.275912  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423: {Name:mkddc44bc05ba35828280547efe210b00108cabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:09.276063  644414 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt
	I1115 11:10:09.276243  644414 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key
	I1115 11:10:09.276437  644414 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
	I1115 11:10:09.276473  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 11:10:09.276509  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 11:10:09.276554  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 11:10:09.276590  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 11:10:09.276617  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 11:10:09.276659  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 11:10:09.276698  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 11:10:09.276726  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 11:10:09.276806  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:10:09.276885  644414 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:10:09.276915  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:10:09.276959  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:10:09.277013  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:10:09.277057  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:10:09.277153  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:10:09.277220  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.277264  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.277297  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.277887  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:10:09.296564  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:10:09.314781  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:10:09.335633  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:10:09.353146  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 11:10:09.370859  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 11:10:09.388232  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:10:09.410774  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:10:09.439944  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:10:09.477014  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:10:09.526226  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:10:09.559717  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:10:09.610930  644414 ssh_runner.go:195] Run: openssl version
	I1115 11:10:09.623460  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:10:09.643972  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.652807  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.653014  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.741237  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:10:09.749901  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:10:09.767184  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.774726  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.774846  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.838136  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:10:09.846476  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:10:09.890099  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.895038  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.895102  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.961757  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:10:09.976918  644414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:10:09.985687  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:10:10.033177  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:10:10.079291  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:10:10.125057  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:10:10.168941  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:10:10.219261  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 11:10:10.289307  644414 kubeadm.go:401] StartCluster: {Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:10:10.289486  644414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:10:10.289574  644414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:10:10.354477  644414 cri.go:89] found id: "ab0d0c34b46d585c39a39112a9d96382b3c2d54b036b01e5aabb4c9adb26fe48"
	I1115 11:10:10.354514  644414 cri.go:89] found id: "f5462600e253c742d103a09b518cadafb5354c9b674147e2394344fc4f6cdd17"
	I1115 11:10:10.354519  644414 cri.go:89] found id: "c9aa769ac1e410d0690ad31ea1ef812bb7de4c70e937d471392caf66737a2862"
	I1115 11:10:10.354523  644414 cri.go:89] found id: "49f53dedd4e32694c1de85010bf005f40b10dfe1e581005787ce4f5229936764"
	I1115 11:10:10.354526  644414 cri.go:89] found id: "e0b918dd4970fd4deab2473f719156caad36c70e91836ec9407fd62c0e66c2f1"
	I1115 11:10:10.354530  644414 cri.go:89] found id: ""
	I1115 11:10:10.354587  644414 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 11:10:10.370661  644414 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:10:10Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:10:10.370748  644414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:10:10.382258  644414 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:10:10.382296  644414 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:10:10.382347  644414 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:10:10.390626  644414 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:10:10.391102  644414 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-439113" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:10:10.391230  644414 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-584713/kubeconfig needs updating (will repair): [kubeconfig missing "ha-439113" cluster setting kubeconfig missing "ha-439113" context setting]
	I1115 11:10:10.391547  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:10.392161  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 11:10:10.393236  644414 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1115 11:10:10.393317  644414 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 11:10:10.393332  644414 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 11:10:10.393338  644414 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 11:10:10.393347  644414 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 11:10:10.393352  644414 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 11:10:10.394951  644414 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:10:10.405841  644414 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1115 11:10:10.405873  644414 kubeadm.go:602] duration metric: took 23.570972ms to restartPrimaryControlPlane
	I1115 11:10:10.405883  644414 kubeadm.go:403] duration metric: took 116.586705ms to StartCluster
	I1115 11:10:10.405898  644414 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:10.405969  644414 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:10:10.406686  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:10.406905  644414 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:10:10.406942  644414 start.go:242] waiting for startup goroutines ...
	I1115 11:10:10.406961  644414 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:10:10.407533  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:10.412935  644414 out.go:179] * Enabled addons: 
	I1115 11:10:10.415804  644414 addons.go:515] duration metric: took 8.829529ms for enable addons: enabled=[]
	I1115 11:10:10.415842  644414 start.go:247] waiting for cluster config update ...
	I1115 11:10:10.415858  644414 start.go:256] writing updated cluster config ...
	I1115 11:10:10.419060  644414 out.go:203] 
	I1115 11:10:10.422348  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:10.422466  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:10.425867  644414 out.go:179] * Starting "ha-439113-m02" control-plane node in "ha-439113" cluster
	I1115 11:10:10.428658  644414 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:10:10.431470  644414 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:10:10.434231  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:10:10.434251  644414 cache.go:65] Caching tarball of preloaded images
	I1115 11:10:10.434373  644414 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:10:10.434390  644414 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:10:10.434509  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:10.434718  644414 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:10:10.459579  644414 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:10:10.459605  644414 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:10:10.459619  644414 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:10:10.459645  644414 start.go:360] acquireMachinesLock for ha-439113-m02: {Name:mk3e9fb80c1177aa3d9d60f93ad9a2d436f1d794 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:10:10.459703  644414 start.go:364] duration metric: took 38.917µs to acquireMachinesLock for "ha-439113-m02"
	I1115 11:10:10.459726  644414 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:10:10.459732  644414 fix.go:54] fixHost starting: m02
	I1115 11:10:10.460001  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:10:10.490667  644414 fix.go:112] recreateIfNeeded on ha-439113-m02: state=Stopped err=<nil>
	W1115 11:10:10.490698  644414 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:10:10.494022  644414 out.go:252] * Restarting existing docker container for "ha-439113-m02" ...
	I1115 11:10:10.494103  644414 cli_runner.go:164] Run: docker start ha-439113-m02
	I1115 11:10:10.848234  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:10:10.876991  644414 kic.go:430] container "ha-439113-m02" state is running.
	I1115 11:10:10.877372  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:10:10.907598  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:10.907880  644414 machine.go:94] provisionDockerMachine start ...
	I1115 11:10:10.907948  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:10.946130  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:10.946438  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:10.946448  644414 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:10:10.947277  644414 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60346->127.0.0.1:33574: read: connection reset by peer
	I1115 11:10:14.161070  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02
	
	I1115 11:10:14.161137  644414 ubuntu.go:182] provisioning hostname "ha-439113-m02"
	I1115 11:10:14.161234  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:14.193112  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:14.193410  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:14.193421  644414 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113-m02 && echo "ha-439113-m02" | sudo tee /etc/hostname
	I1115 11:10:14.414884  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02
	
	I1115 11:10:14.415071  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:14.441593  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:14.441897  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:14.441920  644414 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:10:14.655329  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
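The shell fragment above edits /etc/hosts idempotently: it only appends or rewrites the 127.0.1.1 entry when no line for the new hostname exists yet. An illustrative way to confirm the result on the node (not captured in this log):

	docker exec ha-439113-m02 grep 'ha-439113-m02' /etc/hosts
	# expected to include a line such as:
	#   127.0.1.1 ha-439113-m02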
	I1115 11:10:14.655419  644414 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:10:14.655450  644414 ubuntu.go:190] setting up certificates
	I1115 11:10:14.655485  644414 provision.go:84] configureAuth start
	I1115 11:10:14.655584  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:10:14.684954  644414 provision.go:143] copyHostCerts
	I1115 11:10:14.684996  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:14.685029  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:10:14.685035  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:14.685109  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:10:14.685187  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:14.685203  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:10:14.685208  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:14.685233  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:10:14.685270  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:14.685286  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:10:14.685290  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:14.685314  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:10:14.685358  644414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113-m02 san=[127.0.0.1 192.168.49.3 ha-439113-m02 localhost minikube]
	I1115 11:10:15.164962  644414 provision.go:177] copyRemoteCerts
	I1115 11:10:15.165087  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:10:15.165161  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:15.183565  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:15.309845  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 11:10:15.309910  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:10:15.352565  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 11:10:15.352638  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 11:10:15.389073  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 11:10:15.389137  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:10:15.436657  644414 provision.go:87] duration metric: took 781.140009ms to configureAuth
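configureAuth regenerated the machine server certificate with the SANs listed at provision.go:117 and copied it to /etc/docker/server.pem on the node. As an illustrative check (assuming openssl is available inside the kicbase container), the SANs can be confirmed with:

	docker exec ha-439113-m02 openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# expected to list ha-439113-m02, localhost, minikube, 127.0.0.1 and 192.168.49.3,
	# matching the san=[...] set used when the certificate was generated above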
	I1115 11:10:15.436685  644414 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:10:15.436943  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:15.437049  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:15.467485  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:15.467817  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:15.467839  644414 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:10:16.972469  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:10:16.972493  644414 machine.go:97] duration metric: took 6.064595432s to provisionDockerMachine
	I1115 11:10:16.972505  644414 start.go:293] postStartSetup for "ha-439113-m02" (driver="docker")
	I1115 11:10:16.972515  644414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:10:16.972579  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:10:16.972636  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.011353  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.141531  644414 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:10:17.145724  644414 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:10:17.145750  644414 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:10:17.145761  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:10:17.145819  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:10:17.145893  644414 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:10:17.145901  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 11:10:17.146000  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:10:17.153864  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:10:17.175408  644414 start.go:296] duration metric: took 202.888277ms for postStartSetup
	I1115 11:10:17.175529  644414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:10:17.175603  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.202540  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.314494  644414 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:10:17.322089  644414 fix.go:56] duration metric: took 6.862349383s for fixHost
	I1115 11:10:17.322116  644414 start.go:83] releasing machines lock for "ha-439113-m02", held for 6.862399853s
	I1115 11:10:17.322193  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:10:17.346984  644414 out.go:179] * Found network options:
	I1115 11:10:17.349992  644414 out.go:179]   - NO_PROXY=192.168.49.2
	W1115 11:10:17.357013  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:10:17.357074  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 11:10:17.357145  644414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:10:17.357204  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.357473  644414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:10:17.357528  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.392713  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.393588  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.599074  644414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:10:17.766809  644414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:10:17.766905  644414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:10:17.789163  644414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:10:17.789191  644414 start.go:496] detecting cgroup driver to use...
	I1115 11:10:17.789231  644414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:10:17.789289  644414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:10:17.815110  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:10:17.838070  644414 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:10:17.838143  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:10:17.860257  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:10:17.879590  644414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:10:18.110145  644414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:10:18.361820  644414 docker.go:234] disabling docker service ...
	I1115 11:10:18.361900  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:10:18.384569  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:10:18.416731  644414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:10:18.641786  644414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:10:18.837399  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:10:18.857492  644414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:10:18.878074  644414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:10:18.878149  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.894400  644414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:10:18.894493  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.905139  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.919066  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.934192  644414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:10:18.947793  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.962215  644414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.975913  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.990422  644414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:10:19.001078  644414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:10:19.010948  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:10:19.243052  644414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:11:49.588377  644414 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.345288768s)
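The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted; note that the restart itself took 1m30s, which dominates this phase of the node start. A quick way to confirm the resulting settings (an illustrative sketch, not output from this run):

	docker exec ha-439113-m02 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, assuming the sed commands above applied cleanly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",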
	I1115 11:11:49.588399  644414 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:11:49.588453  644414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:11:49.592631  644414 start.go:564] Will wait 60s for crictl version
	I1115 11:11:49.592694  644414 ssh_runner.go:195] Run: which crictl
	I1115 11:11:49.596673  644414 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:11:49.627565  644414 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:11:49.627655  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:11:49.657574  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:11:49.692786  644414 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:11:49.695732  644414 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 11:11:49.698667  644414 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:11:49.715635  644414 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 11:11:49.719827  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:11:49.729557  644414 mustload.go:66] Loading cluster: ha-439113
	I1115 11:11:49.729790  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:11:49.730057  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:11:49.747197  644414 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:11:49.747477  644414 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.3
	I1115 11:11:49.747492  644414 certs.go:195] generating shared ca certs ...
	I1115 11:11:49.747509  644414 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:11:49.747651  644414 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:11:49.747712  644414 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:11:49.747723  644414 certs.go:257] generating profile certs ...
	I1115 11:11:49.747793  644414 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 11:11:49.747854  644414 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.29032bc8
	I1115 11:11:49.747896  644414 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
	I1115 11:11:49.747908  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 11:11:49.747922  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 11:11:49.747939  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 11:11:49.747953  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 11:11:49.747968  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 11:11:49.747979  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 11:11:49.747995  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 11:11:49.748005  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 11:11:49.748058  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:11:49.748100  644414 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:11:49.748113  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:11:49.748139  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:11:49.748172  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:11:49.748196  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:11:49.748244  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:11:49.748274  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 11:11:49.748290  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 11:11:49.748302  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:49.748361  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:11:49.766640  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:11:49.865171  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 11:11:49.869248  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 11:11:49.877385  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 11:11:49.881661  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 11:11:49.890592  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 11:11:49.894372  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 11:11:49.902879  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 11:11:49.906594  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 11:11:49.914879  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 11:11:49.918911  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 11:11:49.928251  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 11:11:49.931713  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 11:11:49.939808  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:11:49.959417  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:11:49.979171  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:11:49.999374  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:11:50.034447  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 11:11:50.055956  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 11:11:50.075858  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:11:50.096569  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:11:50.123534  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:11:50.145099  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:11:50.165838  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:11:50.187631  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 11:11:50.201727  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 11:11:50.215561  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 11:11:50.228704  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 11:11:50.243716  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 11:11:50.256646  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 11:11:50.274083  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 11:11:50.289451  644414 ssh_runner.go:195] Run: openssl version
	I1115 11:11:50.296096  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:11:50.304816  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:11:50.308605  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:11:50.308696  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:11:50.349933  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:11:50.357859  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:11:50.366131  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:50.370090  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:50.370184  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:50.411529  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:11:50.419530  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:11:50.428122  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:11:50.431990  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:11:50.432078  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:11:50.473336  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
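The 8-character names used for the /etc/ssl/certs symlinks above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hashes of the corresponding certificates, computed by the openssl x509 -hash -noout calls in the log. An illustrative reproduction:

	docker exec ha-439113-m02 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941; minikube then links /etc/ssl/certs/b5213941.0 -> minikubeCA.pem
	# so that OpenSSL's hashed CA directory lookup can find the cluster CA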
	I1115 11:11:50.481905  644414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:11:50.485884  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:11:50.529145  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:11:50.575458  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:11:50.618147  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:11:50.660345  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:11:50.701441  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 11:11:50.742918  644414 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1115 11:11:50.743050  644414 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
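The kubelet unit fragment rendered at kubeadm.go:947 is written a few lines further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Once kubelet has been restarted, the effective flags can be checked on the node (illustrative only, not part of this log):

	docker exec ha-439113-m02 systemctl cat kubelet | grep -- '--node-ip'
	# should print the ExecStart line above, including
	# --hostname-override=ha-439113-m02 and --node-ip=192.168.49.3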
	I1115 11:11:50.743086  644414 kube-vip.go:115] generating kube-vip config ...
	I1115 11:11:50.743137  644414 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 11:11:50.756533  644414 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:11:50.756661  644414 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
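Because the ip_vs modules were not found above, kube-vip skips IPVS-based control-plane load balancing and only advertises the VIP 192.168.49.254 via ARP on eth0, with leader election on the plndr-cp-lock lease. Once the static pod is running, the VIP can be probed like the apiserver itself (illustrative, not from this run):

	curl -k https://192.168.49.254:8443/version
	# answered by whichever control-plane node currently holds the lease
	# defined in the manifest above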
	I1115 11:11:50.756809  644414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:11:50.766452  644414 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:11:50.766519  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 11:11:50.774299  644414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 11:11:50.787555  644414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:11:50.801348  644414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 11:11:50.815426  644414 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 11:11:50.819361  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:11:50.829846  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:11:50.971817  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:11:50.986595  644414 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:11:50.987008  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:11:50.990541  644414 out.go:179] * Verifying Kubernetes components...
	I1115 11:11:50.993289  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:11:51.129111  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:11:51.143975  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 11:11:51.144052  644414 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 11:11:51.144377  644414 node_ready.go:35] waiting up to 6m0s for node "ha-439113-m02" to be "Ready" ...
	I1115 11:11:54.175109  644414 node_ready.go:49] node "ha-439113-m02" is "Ready"
	I1115 11:11:54.175142  644414 node_ready.go:38] duration metric: took 3.030741263s for node "ha-439113-m02" to be "Ready" ...
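node_ready.go polls the Node object until its Ready condition reports True; here that took about 3s after kubelet was started. The equivalent manual check (illustrative):

	kubectl get node ha-439113-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints "True" once the kubelet on m02 has reported in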
	I1115 11:11:54.175156  644414 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:11:54.175217  644414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:11:54.191139  644414 api_server.go:72] duration metric: took 3.204498804s to wait for apiserver process to appear ...
	I1115 11:11:54.191165  644414 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:11:54.191183  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:54.270987  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 11:11:54.271020  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 11:11:54.691298  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:54.702970  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:54.703005  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:55.191248  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:55.208784  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:55.208820  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:55.691283  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:55.701010  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:55.701040  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:56.191695  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:56.205744  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:56.205779  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:56.691307  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:56.703521  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 11:11:56.706435  644414 api_server.go:141] control plane version: v1.34.1
	I1115 11:11:56.706475  644414 api_server.go:131] duration metric: took 2.515302396s to wait for apiserver health ...
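The healthz progression above is typical for a restarted HA control plane: anonymous requests are first rejected with 403, /healthz then returns 500 while the rbac/bootstrap-roles post-start hook is still pending, and finally 200 once all hooks have completed. The same probe can be run by hand (illustrative):

	curl -ks 'https://192.168.49.2:8443/healthz?verbose'
	# lists each check as [+] ok or [-] failed, as captured in the log;
	# an overall "ok" body with HTTP 200 means the apiserver is healthy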
	I1115 11:11:56.706484  644414 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:11:56.718211  644414 system_pods.go:59] 26 kube-system pods found
	I1115 11:11:56.718249  644414 system_pods.go:61] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.718259  644414 system_pods.go:61] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.718265  644414 system_pods.go:61] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 11:11:56.718282  644414 system_pods.go:61] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 11:11:56.718287  644414 system_pods.go:61] "etcd-ha-439113-m03" [5e59ce68-9c25-4639-ac5a-1f55855c2a60] Running
	I1115 11:11:56.718291  644414 system_pods.go:61] "kindnet-4k2k2" [5a741bbc-f2ab-4432-b229-309437f9455c] Running
	I1115 11:11:56.718295  644414 system_pods.go:61] "kindnet-kxl4t" [99aa3cce-8825-4785-a8c2-b42146240e09] Running
	I1115 11:11:56.718299  644414 system_pods.go:61] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 11:11:56.718305  644414 system_pods.go:61] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 11:11:56.718316  644414 system_pods.go:61] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 11:11:56.718322  644414 system_pods.go:61] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 11:11:56.718327  644414 system_pods.go:61] "kube-apiserver-ha-439113-m03" [46354a8c-2a61-4934-8b1a-57c563aa326b] Running
	I1115 11:11:56.718337  644414 system_pods.go:61] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:11:56.718352  644414 system_pods.go:61] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 11:11:56.718361  644414 system_pods.go:61] "kube-controller-manager-ha-439113-m03" [555d953c-b848-4daa-90c5-07b51c5c7722] Running
	I1115 11:11:56.718366  644414 system_pods.go:61] "kube-proxy-2fgtm" [7a3fd93a-54d8-4821-a49a-6839ed65fe69] Running
	I1115 11:11:56.718373  644414 system_pods.go:61] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 11:11:56.718384  644414 system_pods.go:61] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 11:11:56.718389  644414 system_pods.go:61] "kube-proxy-njlxj" [9150615b-96b9-416b-a5ca-79c380a8a9cb] Running
	I1115 11:11:56.718395  644414 system_pods.go:61] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:11:56.718405  644414 system_pods.go:61] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 11:11:56.718410  644414 system_pods.go:61] "kube-scheduler-ha-439113-m03" [e18cb155-9e7b-43e1-818b-bfff6a289f39] Running
	I1115 11:11:56.718414  644414 system_pods.go:61] "kube-vip-ha-439113" [8ed03cf0-14c3-4946-a73d-8cc5545156cb] Running
	I1115 11:11:56.718426  644414 system_pods.go:61] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 11:11:56.718432  644414 system_pods.go:61] "kube-vip-ha-439113-m03" [c0ddae32-acc6-4cda-8dde-084b2eea14a8] Running
	I1115 11:11:56.718438  644414 system_pods.go:61] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:11:56.718444  644414 system_pods.go:74] duration metric: took 11.954415ms to wait for pod list to return data ...
	I1115 11:11:56.718453  644414 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:11:56.724493  644414 default_sa.go:45] found service account: "default"
	I1115 11:11:56.724536  644414 default_sa.go:55] duration metric: took 6.072136ms for default service account to be created ...
	I1115 11:11:56.724547  644414 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:11:56.819602  644414 system_pods.go:86] 26 kube-system pods found
	I1115 11:11:56.819647  644414 system_pods.go:89] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.819658  644414 system_pods.go:89] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.819664  644414 system_pods.go:89] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 11:11:56.819670  644414 system_pods.go:89] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 11:11:56.819674  644414 system_pods.go:89] "etcd-ha-439113-m03" [5e59ce68-9c25-4639-ac5a-1f55855c2a60] Running
	I1115 11:11:56.819679  644414 system_pods.go:89] "kindnet-4k2k2" [5a741bbc-f2ab-4432-b229-309437f9455c] Running
	I1115 11:11:56.819694  644414 system_pods.go:89] "kindnet-kxl4t" [99aa3cce-8825-4785-a8c2-b42146240e09] Running
	I1115 11:11:56.819703  644414 system_pods.go:89] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 11:11:56.819711  644414 system_pods.go:89] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 11:11:56.819721  644414 system_pods.go:89] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 11:11:56.819726  644414 system_pods.go:89] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 11:11:56.819730  644414 system_pods.go:89] "kube-apiserver-ha-439113-m03" [46354a8c-2a61-4934-8b1a-57c563aa326b] Running
	I1115 11:11:56.819738  644414 system_pods.go:89] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:11:56.819747  644414 system_pods.go:89] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 11:11:56.819752  644414 system_pods.go:89] "kube-controller-manager-ha-439113-m03" [555d953c-b848-4daa-90c5-07b51c5c7722] Running
	I1115 11:11:56.819756  644414 system_pods.go:89] "kube-proxy-2fgtm" [7a3fd93a-54d8-4821-a49a-6839ed65fe69] Running
	I1115 11:11:56.819770  644414 system_pods.go:89] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 11:11:56.819778  644414 system_pods.go:89] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 11:11:56.819783  644414 system_pods.go:89] "kube-proxy-njlxj" [9150615b-96b9-416b-a5ca-79c380a8a9cb] Running
	I1115 11:11:56.819789  644414 system_pods.go:89] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:11:56.819797  644414 system_pods.go:89] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 11:11:56.819803  644414 system_pods.go:89] "kube-scheduler-ha-439113-m03" [e18cb155-9e7b-43e1-818b-bfff6a289f39] Running
	I1115 11:11:56.819811  644414 system_pods.go:89] "kube-vip-ha-439113" [8ed03cf0-14c3-4946-a73d-8cc5545156cb] Running
	I1115 11:11:56.819815  644414 system_pods.go:89] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 11:11:56.819819  644414 system_pods.go:89] "kube-vip-ha-439113-m03" [c0ddae32-acc6-4cda-8dde-084b2eea14a8] Running
	I1115 11:11:56.819824  644414 system_pods.go:89] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:11:56.819841  644414 system_pods.go:126] duration metric: took 95.282586ms to wait for k8s-apps to be running ...
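The pod checks above enumerate every kube-system pod and record which ones are Running but have unready containers. A small client-go sketch of the same kind of check follows; the kubeconfig path is a placeholder, and readiness is read from the standard PodReady condition.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the test run uses its own profile directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		// Mirrors the "Running / Ready:ContainersNotReady" distinction logged above.
		fmt.Printf("%-50s phase=%-9s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}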
	I1115 11:11:56.819854  644414 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:11:56.819918  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:11:56.837030  644414 system_svc.go:56] duration metric: took 17.155047ms WaitForService to wait for kubelet
	I1115 11:11:56.837061  644414 kubeadm.go:587] duration metric: took 5.85042521s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:11:56.837082  644414 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:11:56.841207  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:11:56.841239  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:11:56.841253  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:11:56.841257  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:11:56.841262  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:11:56.841265  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:11:56.841282  644414 node_conditions.go:105] duration metric: took 4.194343ms to run NodePressure ...
	I1115 11:11:56.841300  644414 start.go:242] waiting for startup goroutines ...
	I1115 11:11:56.841324  644414 start.go:256] writing updated cluster config ...
	I1115 11:11:56.844944  644414 out.go:203] 
	I1115 11:11:56.848069  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:11:56.848191  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:11:56.851562  644414 out.go:179] * Starting "ha-439113-m04" worker node in "ha-439113" cluster
	I1115 11:11:56.855417  644414 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:11:56.858314  644414 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:11:56.861196  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:11:56.861243  644414 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:11:56.861453  644414 cache.go:65] Caching tarball of preloaded images
	I1115 11:11:56.861539  644414 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:11:56.861554  644414 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:11:56.861725  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:11:56.894239  644414 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:11:56.894262  644414 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:11:56.894277  644414 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:11:56.894301  644414 start.go:360] acquireMachinesLock for ha-439113-m04: {Name:mke6e857e5b25fb7a1d96f7fe08934c7b44258f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:11:56.894360  644414 start.go:364] duration metric: took 38.252µs to acquireMachinesLock for "ha-439113-m04"
	I1115 11:11:56.894384  644414 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:11:56.894391  644414 fix.go:54] fixHost starting: m04
	I1115 11:11:56.894639  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:11:56.934538  644414 fix.go:112] recreateIfNeeded on ha-439113-m04: state=Stopped err=<nil>
	W1115 11:11:56.934571  644414 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:11:56.937723  644414 out.go:252] * Restarting existing docker container for "ha-439113-m04" ...
	I1115 11:11:56.937813  644414 cli_runner.go:164] Run: docker start ha-439113-m04
	I1115 11:11:57.292353  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:11:57.320590  644414 kic.go:430] container "ha-439113-m04" state is running.
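The restart sequence above is a `docker start` followed by `docker container inspect` calls to confirm the node container is running again. A minimal sketch of those two CLI invocations from Go, with the container name taken from the log, might look like this:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "ha-439113-m04" // node container name from the log above
	if out, err := exec.Command("docker", "start", name).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("docker start: %v\n%s", err, out))
	}
	// Equivalent of: docker container inspect <name> --format={{.State.Status}}
	out, err := exec.Command("docker", "container", "inspect", name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("state:", strings.TrimSpace(string(out))) // expect "running"
}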
	I1115 11:11:57.320978  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:11:57.343942  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:11:57.344181  644414 machine.go:94] provisionDockerMachine start ...
	I1115 11:11:57.344243  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:11:57.365933  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:11:57.366241  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:11:57.366255  644414 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:11:57.366995  644414 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:12:00.666212  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m04
	
	I1115 11:12:00.666285  644414 ubuntu.go:182] provisioning hostname "ha-439113-m04"
	I1115 11:12:00.666399  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:00.703141  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:12:00.703457  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:12:00.703468  644414 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113-m04 && echo "ha-439113-m04" | sudo tee /etc/hostname
	I1115 11:12:00.898855  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m04
	
	I1115 11:12:00.898950  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:00.948730  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:12:00.949093  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:12:00.949120  644414 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:12:01.162002  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
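Provisioning the hostname, as logged above, amounts to running sudo hostname, writing /etc/hostname, and patching the 127.0.1.1 entry in /etc/hosts over SSH. The sketch below composes the same shell snippet and runs it through the system ssh client; the key path is a placeholder, the port is the forwarded SSH port from the log, and minikube itself uses its own SSH runner rather than the ssh binary.

package main

import (
	"fmt"
	"os/exec"
)

// hostsFixupScript mirrors the logged script: rewrite the 127.0.1.1 entry, or append one.
func hostsFixupScript(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	host := "127.0.0.1" // port-forwarded container SSH, as in the log
	script := fmt.Sprintf("sudo hostname %[1]s && echo %[1]s | sudo tee /etc/hostname && %s",
		"ha-439113-m04", hostsFixupScript("ha-439113-m04"))
	// Placeholder key path; the test uses the machine's generated id_rsa.
	cmd := exec.Command("ssh", "-p", "33579", "-i", "/path/to/id_rsa", "docker@"+host, script)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}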
	I1115 11:12:01.162071  644414 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:12:01.162106  644414 ubuntu.go:190] setting up certificates
	I1115 11:12:01.162147  644414 provision.go:84] configureAuth start
	I1115 11:12:01.162228  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:12:01.189297  644414 provision.go:143] copyHostCerts
	I1115 11:12:01.189345  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:12:01.189381  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:12:01.189387  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:12:01.189469  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:12:01.189552  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:12:01.189569  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:12:01.189574  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:12:01.189602  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:12:01.189643  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:12:01.189658  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:12:01.189662  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:12:01.189686  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:12:01.189732  644414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113-m04 san=[127.0.0.1 192.168.49.5 ha-439113-m04 localhost minikube]
	I1115 11:12:01.793644  644414 provision.go:177] copyRemoteCerts
	I1115 11:12:01.793724  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:12:01.793769  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:01.813786  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:01.932159  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 11:12:01.932221  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:12:01.959503  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 11:12:01.959565  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 11:12:01.985894  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 11:12:01.985956  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:12:02.016893  644414 provision.go:87] duration metric: took 854.716001ms to configureAuth
	I1115 11:12:02.016972  644414 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:12:02.017324  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:12:02.017494  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.042340  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:12:02.042641  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:12:02.042657  644414 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:12:02.421793  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:12:02.421855  644414 machine.go:97] duration metric: took 5.077657106s to provisionDockerMachine
	I1115 11:12:02.421891  644414 start.go:293] postStartSetup for "ha-439113-m04" (driver="docker")
	I1115 11:12:02.421937  644414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:12:02.422045  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:12:02.422113  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.441735  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.549972  644414 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:12:02.553292  644414 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:12:02.553326  644414 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:12:02.553339  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:12:02.553398  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:12:02.553481  644414 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:12:02.553492  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 11:12:02.553591  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:12:02.561640  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:12:02.581188  644414 start.go:296] duration metric: took 159.246745ms for postStartSetup
	I1115 11:12:02.581283  644414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:12:02.581334  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.598560  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.702117  644414 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:12:02.707693  644414 fix.go:56] duration metric: took 5.813294693s for fixHost
	I1115 11:12:02.707719  644414 start.go:83] releasing machines lock for "ha-439113-m04", held for 5.813345581s
	I1115 11:12:02.707815  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:12:02.727805  644414 out.go:179] * Found network options:
	I1115 11:12:02.730701  644414 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1115 11:12:02.733528  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:12:02.733564  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:12:02.733599  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:12:02.733615  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 11:12:02.733685  644414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:12:02.733735  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.734056  644414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:12:02.734115  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.762180  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.770444  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.906742  644414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:12:02.982777  644414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:12:02.982870  644414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:12:02.991311  644414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:12:02.991334  644414 start.go:496] detecting cgroup driver to use...
	I1115 11:12:02.991372  644414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:12:02.991426  644414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:12:03.010259  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:12:03.026209  644414 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:12:03.026295  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:12:03.042235  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:12:03.056541  644414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:12:03.207440  644414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:12:03.335536  644414 docker.go:234] disabling docker service ...
	I1115 11:12:03.335651  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:12:03.353883  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:12:03.369431  644414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:12:03.486211  644414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:12:03.610710  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:12:03.625360  644414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:12:03.641312  644414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:12:03.641378  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.651264  644414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:12:03.651338  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.665109  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.675589  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.686503  644414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:12:03.694865  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.705871  644414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.714726  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.723852  644414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:12:03.731853  644414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:12:03.740511  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:12:03.853255  644414 ssh_runner.go:195] Run: sudo systemctl restart crio
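The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup driver, conmon cgroup, default sysctls) with sed and then restarts cri-o. The same commands can be collected and replayed, as in the sketch below; it has to run as root on the node and is only an approximation of the logged steps.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := []string{
		// point cri-o at the expected pause image
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
		// switch the cgroup driver to cgroupfs and keep conmon in the pod cgroup
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		// ensure a default_sysctls block exists, then allow unprivileged low ports
		`sudo grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf`,
		// apply the new configuration
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, s := range steps {
		out, err := exec.Command("sh", "-c", s).CombinedOutput()
		fmt.Printf("$ %s\n%s", s, out)
		if err != nil {
			fmt.Println("error:", err)
		}
	}
}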
	I1115 11:12:04.003040  644414 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:12:04.003163  644414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:12:04.007573  644414 start.go:564] Will wait 60s for crictl version
	I1115 11:12:04.007728  644414 ssh_runner.go:195] Run: which crictl
	I1115 11:12:04.014385  644414 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:12:04.042291  644414 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:12:04.042400  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:12:04.076162  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:12:04.110265  644414 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:12:04.113250  644414 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 11:12:04.116130  644414 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1115 11:12:04.118985  644414 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:12:04.135746  644414 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 11:12:04.140419  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:12:04.151141  644414 mustload.go:66] Loading cluster: ha-439113
	I1115 11:12:04.151383  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:12:04.151632  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:12:04.169829  644414 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:12:04.170121  644414 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.5
	I1115 11:12:04.170137  644414 certs.go:195] generating shared ca certs ...
	I1115 11:12:04.170152  644414 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:12:04.170287  644414 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:12:04.170332  644414 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:12:04.170347  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 11:12:04.170362  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 11:12:04.170377  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 11:12:04.170392  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 11:12:04.170455  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:12:04.170489  644414 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:12:04.170502  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:12:04.170528  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:12:04.170554  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:12:04.170579  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:12:04.170625  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:12:04.170653  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.170666  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.170682  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.170703  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:12:04.192999  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:12:04.214491  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:12:04.238386  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:12:04.261791  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:12:04.282186  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:12:04.301663  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:12:04.323494  644414 ssh_runner.go:195] Run: openssl version
	I1115 11:12:04.330506  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:12:04.339641  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.343359  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.343471  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.384944  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:12:04.393726  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:12:04.401885  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.405917  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.405984  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.448096  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:12:04.456341  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:12:04.464809  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.469548  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.469657  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.512809  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
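Each CA certificate above is installed under /usr/share/ca-certificates and then linked as /etc/ssl/certs/<subject-hash>.0, where the hash comes from openssl x509 -hash -noout. A short sketch of deriving that hash and creating the symlink (root required; the path is taken from the log) is:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of ln -fs <cert> <link>: remove any existing link, then recreate it.
	if err := os.Remove(link); err != nil && !os.IsNotExist(err) {
		panic(err)
	}
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}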
	I1115 11:12:04.521564  644414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:12:04.525477  644414 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 11:12:04.525571  644414 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1115 11:12:04.525671  644414 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:12:04.525750  644414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:12:04.534631  644414 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:12:04.534732  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1115 11:12:04.542762  644414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 11:12:04.555474  644414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:12:04.568549  644414 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 11:12:04.572246  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:12:04.582645  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:12:04.720397  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
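The kubelet setup above writes the [Unit]/[Service] drop-in shown earlier to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, reloads systemd, and starts the service. A compact sketch of those steps, with the ExecStart flags copied from the log, could be:

package main

import (
	"os"
	"os/exec"
)

// Drop-in contents as logged above; running this for real requires root and an
// actual kubelet installed at the referenced path.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5

[Install]
`

func main() {
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			panic(err)
		}
	}
}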
	I1115 11:12:04.734431  644414 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1115 11:12:04.734793  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:12:04.737605  644414 out.go:179] * Verifying Kubernetes components...
	I1115 11:12:04.740524  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:12:04.870273  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:12:04.886167  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 11:12:04.886294  644414 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 11:12:04.886567  644414 node_ready.go:35] waiting up to 6m0s for node "ha-439113-m04" to be "Ready" ...
	I1115 11:12:04.890505  644414 node_ready.go:49] node "ha-439113-m04" is "Ready"
	I1115 11:12:04.890532  644414 node_ready.go:38] duration metric: took 3.920221ms for node "ha-439113-m04" to be "Ready" ...
	I1115 11:12:04.890569  644414 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:12:04.890627  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:12:04.906249  644414 system_svc.go:56] duration metric: took 15.693042ms WaitForService to wait for kubelet
	I1115 11:12:04.906349  644414 kubeadm.go:587] duration metric: took 171.724556ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:12:04.906397  644414 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:12:04.916259  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:12:04.916376  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:12:04.916421  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:12:04.916457  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:12:04.916477  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:12:04.916512  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:12:04.916538  644414 node_conditions.go:105] duration metric: took 10.120472ms to run NodePressure ...
	I1115 11:12:04.916592  644414 start.go:242] waiting for startup goroutines ...
	I1115 11:12:04.916629  644414 start.go:256] writing updated cluster config ...
	I1115 11:12:04.917071  644414 ssh_runner.go:195] Run: rm -f paused
	I1115 11:12:04.922331  644414 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:12:04.922989  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 11:12:04.955742  644414 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4g6sm" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:12:06.963336  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:08.980310  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:11.479328  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:13.964446  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:16.463626  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:18.465383  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:20.962686  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:22.964048  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:24.966447  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:27.463942  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:29.466713  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	I1115 11:12:30.462795  644414 pod_ready.go:94] pod "coredns-66bc5c9577-4g6sm" is "Ready"
	I1115 11:12:30.462820  644414 pod_ready.go:86] duration metric: took 25.506978071s for pod "coredns-66bc5c9577-4g6sm" in "kube-system" namespace to be "Ready" or be gone ...
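The retry loop above waits for a pod either to report the Ready condition or to disappear entirely (the "gone" case used below for the removed m03 pods). A client-go sketch of that wait, with placeholder kubeconfig and pod names, is:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitReadyOrGone returns nil once the pod is Ready or has been deleted.
func waitReadyOrGone(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // pod is gone, treat as done (as in the log)
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod is "Ready"
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	// Error handling elided for brevity; paths and names are placeholders.
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)
	fmt.Println(waitReadyOrGone(cs, "kube-system", "coredns-66bc5c9577-4g6sm", 4*time.Minute))
}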
	I1115 11:12:30.462830  644414 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mlm6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.469415  644414 pod_ready.go:94] pod "coredns-66bc5c9577-mlm6m" is "Ready"
	I1115 11:12:30.469441  644414 pod_ready.go:86] duration metric: took 6.60411ms for pod "coredns-66bc5c9577-mlm6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.473231  644414 pod_ready.go:83] waiting for pod "etcd-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.480070  644414 pod_ready.go:94] pod "etcd-ha-439113" is "Ready"
	I1115 11:12:30.480096  644414 pod_ready.go:86] duration metric: took 6.837381ms for pod "etcd-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.480106  644414 pod_ready.go:83] waiting for pod "etcd-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.486550  644414 pod_ready.go:94] pod "etcd-ha-439113-m02" is "Ready"
	I1115 11:12:30.486578  644414 pod_ready.go:86] duration metric: took 6.465838ms for pod "etcd-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.486589  644414 pod_ready.go:83] waiting for pod "etcd-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.657170  644414 request.go:683] "Waited before sending request" delay="167.271906ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 11:12:30.660251  644414 pod_ready.go:99] pod "etcd-ha-439113-m03" in "kube-system" namespace is gone: node "ha-439113-m03" hosting pod "etcd-ha-439113-m03" is not found/running (skipping!): nodes "ha-439113-m03" not found
	I1115 11:12:30.660271  644414 pod_ready.go:86] duration metric: took 173.674417ms for pod "etcd-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.856532  644414 request.go:683] "Waited before sending request" delay="196.157902ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1115 11:12:30.862230  644414 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.056631  644414 request.go:683] "Waited before sending request" delay="194.303781ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113"
	I1115 11:12:31.256567  644414 request.go:683] "Waited before sending request" delay="196.320457ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:31.260364  644414 pod_ready.go:94] pod "kube-apiserver-ha-439113" is "Ready"
	I1115 11:12:31.260440  644414 pod_ready.go:86] duration metric: took 398.184225ms for pod "kube-apiserver-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.260460  644414 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.456733  644414 request.go:683] "Waited before sending request" delay="196.195936ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113-m02"
	I1115 11:12:31.657283  644414 request.go:683] "Waited before sending request" delay="189.364553ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:31.669486  644414 pod_ready.go:94] pod "kube-apiserver-ha-439113-m02" is "Ready"
	I1115 11:12:31.669527  644414 pod_ready.go:86] duration metric: took 409.053455ms for pod "kube-apiserver-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.669545  644414 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.856759  644414 request.go:683] "Waited before sending request" delay="187.140315ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113-m03"
	I1115 11:12:32.057081  644414 request.go:683] "Waited before sending request" delay="194.340659ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 11:12:32.060246  644414 pod_ready.go:99] pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace is gone: node "ha-439113-m03" hosting pod "kube-apiserver-ha-439113-m03" is not found/running (skipping!): nodes "ha-439113-m03" not found
	I1115 11:12:32.060269  644414 pod_ready.go:86] duration metric: took 390.716754ms for pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:32.256765  644414 request.go:683] "Waited before sending request" delay="196.346784ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1115 11:12:32.260967  644414 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:32.457411  644414 request.go:683] "Waited before sending request" delay="196.343854ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-439113"
	I1115 11:12:32.656543  644414 request.go:683] "Waited before sending request" delay="195.259075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:32.857312  644414 request.go:683] "Waited before sending request" delay="95.237723ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-439113"
	I1115 11:12:33.056759  644414 request.go:683] "Waited before sending request" delay="193.348543ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:33.456512  644414 request.go:683] "Waited before sending request" delay="191.213474ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:33.857248  644414 request.go:683] "Waited before sending request" delay="92.163849ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	W1115 11:12:34.268915  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:36.769187  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:38.769594  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:40.775431  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:43.268655  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	I1115 11:12:45.275032  644414 pod_ready.go:94] pod "kube-controller-manager-ha-439113" is "Ready"
	I1115 11:12:45.275075  644414 pod_ready.go:86] duration metric: took 13.01407493s for pod "kube-controller-manager-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.275087  644414 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.305482  644414 pod_ready.go:94] pod "kube-controller-manager-ha-439113-m02" is "Ready"
	I1115 11:12:45.305509  644414 pod_ready.go:86] duration metric: took 30.414418ms for pod "kube-controller-manager-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.305520  644414 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.308592  644414 pod_ready.go:99] pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace is gone: getting pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace (will retry): pods "kube-controller-manager-ha-439113-m03" not found
	I1115 11:12:45.308616  644414 pod_ready.go:86] duration metric: took 3.088777ms for pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.312595  644414 pod_ready.go:83] waiting for pod "kube-proxy-2fgtm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.319584  644414 pod_ready.go:94] pod "kube-proxy-2fgtm" is "Ready"
	I1115 11:12:45.319658  644414 pod_ready.go:86] duration metric: took 6.96691ms for pod "kube-proxy-2fgtm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.319684  644414 pod_ready.go:83] waiting for pod "kube-proxy-k7bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.333364  644414 pod_ready.go:94] pod "kube-proxy-k7bcn" is "Ready"
	I1115 11:12:45.333446  644414 pod_ready.go:86] duration metric: took 13.743575ms for pod "kube-proxy-k7bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.333472  644414 pod_ready.go:83] waiting for pod "kube-proxy-kgftx" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.461841  644414 request.go:683] "Waited before sending request" delay="128.26876ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kgftx"
	I1115 11:12:45.662133  644414 request.go:683] "Waited before sending request" delay="196.336603ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:45.666231  644414 pod_ready.go:94] pod "kube-proxy-kgftx" is "Ready"
	I1115 11:12:45.666259  644414 pod_ready.go:86] duration metric: took 332.766862ms for pod "kube-proxy-kgftx" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.862402  644414 request.go:683] "Waited before sending request" delay="196.047882ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1115 11:12:45.868100  644414 pod_ready.go:83] waiting for pod "kube-scheduler-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:46.061503  644414 request.go:683] "Waited before sending request" delay="193.299208ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113"
	I1115 11:12:46.262349  644414 request.go:683] "Waited before sending request" delay="196.337092ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:46.266390  644414 pod_ready.go:94] pod "kube-scheduler-ha-439113" is "Ready"
	I1115 11:12:46.266415  644414 pod_ready.go:86] duration metric: took 398.289218ms for pod "kube-scheduler-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:46.266426  644414 pod_ready.go:83] waiting for pod "kube-scheduler-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:46.461857  644414 request.go:683] "Waited before sending request" delay="195.354736ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113-m02"
	I1115 11:12:46.662164  644414 request.go:683] "Waited before sending request" delay="196.315389ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:46.862451  644414 request.go:683] "Waited before sending request" delay="95.198714ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113-m02"
	I1115 11:12:47.062064  644414 request.go:683] "Waited before sending request" delay="194.32444ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:47.462004  644414 request.go:683] "Waited before sending request" delay="191.259764ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:47.862129  644414 request.go:683] "Waited before sending request" delay="91.206426ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	W1115 11:12:48.273067  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:50.273503  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:52.273873  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:54.774253  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:56.774741  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:59.273054  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:01.273531  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:03.274007  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:05.773995  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:08.274070  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:10.774950  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:13.273142  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:15.774523  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:18.275146  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:20.775066  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:23.273644  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:25.772983  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:27.773086  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:29.774439  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:32.274282  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:34.773274  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:36.774007  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:38.774499  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:41.272920  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:43.272980  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:45.290069  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:47.774370  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:49.775099  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:52.273471  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:54.774040  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:56.776828  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:58.777477  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:01.274086  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:03.774603  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:06.274270  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:08.776333  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:11.274406  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:13.775288  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:16.274470  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:18.774609  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:21.275329  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:23.773704  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:25.781356  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:28.273802  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:30.773867  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:33.273730  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:35.274388  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:37.774988  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:40.273650  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:42.274574  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:44.775136  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:47.273253  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:49.774129  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:52.274209  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:54.773957  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:56.774057  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:58.774103  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:00.794798  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:03.273466  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:05.274892  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:07.773906  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:09.775150  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:12.274372  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:14.773892  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:16.774210  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:19.273576  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:21.773796  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:24.273997  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:26.274175  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:28.775134  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:31.275044  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:33.773408  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:35.774067  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:37.774322  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:40.273391  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:42.275088  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:44.773835  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:46.773944  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:49.273345  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:51.274206  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:53.275406  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:55.276298  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:57.773509  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:59.773622  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:16:01.773991  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:16:04.273687  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	I1115 11:16:04.922792  644414 pod_ready.go:86] duration metric: took 3m18.656348919s for pod "kube-scheduler-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:16:04.922828  644414 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1115 11:16:04.922844  644414 pod_ready.go:40] duration metric: took 4m0.000432421s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:16:04.926118  644414 out.go:203] 
	W1115 11:16:04.928902  644414 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1115 11:16:04.931693  644414 out.go:203] 
	
	
	==> CRI-O <==
	Nov 15 11:12:27 ha-439113 crio[666]: time="2025-11-15T11:12:27.920544626Z" level=info msg="Started container" PID=1433 containerID=45eb4921c003b25c5119ab01196399bab3eb8157fb07652ba3dcd97194afeb00 description=kube-system/kube-controller-manager-ha-439113/kube-controller-manager id=fa832c19-eb18-47af-80d3-4790cad3225e name=/runtime.v1.RuntimeService/StartContainer sandboxID=21e90ac59d7247826fca1e350ef4c6d641540ffb41065bb8d5e3136341a1f7e4
	Nov 15 11:12:28 ha-439113 conmon[1137]: conmon d86466a64c1754474a32 <ninfo>: container 1142 exited with status 1
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.303366553Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=776f7c67-301a-4655-9f1e-c0f4d2b6bdaf name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.306045894Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=01b7c975-ef4d-4609-85fa-e323353431bd name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.308511994Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8f6d110a-f199-4160-b315-87aac4712b71 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.308610668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.319769952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.320004347Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1658f23bf43e3861272003631cb2125f6cd69132a0a16a46de920e7b647021eb/merged/etc/passwd: no such file or directory"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.320027059Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1658f23bf43e3861272003631cb2125f6cd69132a0a16a46de920e7b647021eb/merged/etc/group: no such file or directory"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.320305901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.388496736Z" level=info msg="Created container 4307de9c87d365cc4c90d647228026e786575caa2299668420c19c736afced68: kube-system/storage-provisioner/storage-provisioner" id=8f6d110a-f199-4160-b315-87aac4712b71 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.38961912Z" level=info msg="Starting container: 4307de9c87d365cc4c90d647228026e786575caa2299668420c19c736afced68" id=bfef2a5f-46f3-44e9-9266-3ac15c3e2f60 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.393175299Z" level=info msg="Started container" PID=1445 containerID=4307de9c87d365cc4c90d647228026e786575caa2299668420c19c736afced68 description=kube-system/storage-provisioner/storage-provisioner id=bfef2a5f-46f3-44e9-9266-3ac15c3e2f60 name=/runtime.v1.RuntimeService/StartContainer sandboxID=94d3e897f0476e4f3abaa049d7990fde57c5406c8c5bb70e73a7146a92b5c99a
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.422814838Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.426273738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.426311481Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.42633375Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.435633901Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.43567025Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.435692969Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.443292786Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.443437303Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.443463231Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.447544594Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.447580648Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	4307de9c87d36       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   3 minutes ago       Running             storage-provisioner       4                   94d3e897f0476       storage-provisioner                 kube-system
	45eb4921c003b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   3 minutes ago       Running             kube-controller-manager   6                   21e90ac59d724       kube-controller-manager-ha-439113   kube-system
	56ca04edf5389       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   4 minutes ago       Running             busybox                   2                   b9f35a414830a       busybox-7b57f96db7-vddcm            default
	16ebc70b03ad3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 minutes ago       Running             kube-proxy                2                   dbf5fcdbf92d1       kube-proxy-k7bcn                    kube-system
	ff8f6f3f30d64       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   2                   d43213c9afa20       coredns-66bc5c9577-mlm6m            kube-system
	66d3cca12da72       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   2                   8504950f9102e       coredns-66bc5c9577-4g6sm            kube-system
	624e9c4484de9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 minutes ago       Running             kindnet-cni               2                   02b3165dd3170       kindnet-q4kpj                       kube-system
	d86466a64c175       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Exited              storage-provisioner       3                   94d3e897f0476       storage-provisioner                 kube-system
	be71898116747       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   4 minutes ago       Exited              kube-controller-manager   5                   21e90ac59d724       kube-controller-manager-ha-439113   kube-system
	d24d48c3f9b01       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   4 minutes ago       Running             kube-apiserver            3                   80d29a5d57c81       kube-apiserver-ha-439113            kube-system
	ab0d0c34b46d5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   5 minutes ago       Running             etcd                      2                   e3e01caa47fdb       etcd-ha-439113                      kube-system
	f5462600e253c       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   5 minutes ago       Running             kube-vip                  2                   c0b629ba4b9ea       kube-vip-ha-439113                  kube-system
	c9aa769ac1e41       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   5 minutes ago       Exited              kube-apiserver            2                   80d29a5d57c81       kube-apiserver-ha-439113            kube-system
	e0b918dd4970f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   5 minutes ago       Running             kube-scheduler            2                   1552e5cdb042a       kube-scheduler-ha-439113            kube-system
	
	
	==> coredns [66d3cca12da72808d1018e1a6ec972546fda6374c31dd377d5d8dc684e2ceb3e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34700 - 4439 "HINFO IN 6986068788273380099.6825403624280059219. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030217966s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [ff8f6f3f30d64dbd44181797a52d66d21ee28c0ae7639d5d1bdbffd3052c24be] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40461 - 514 "HINFO IN 2475121785806463085.1107501801826590384. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005830505s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-439113
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_52_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:52:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:16:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:15:59 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:15:59 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:15:59 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:15:59 +0000   Sat, 15 Nov 2025 11:12:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-439113
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                6518a9f9-bb2d-42ae-b78a-3db01b5306a4
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vddcm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-4g6sm             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     23m
	  kube-system                 coredns-66bc5c9577-mlm6m             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     23m
	  kube-system                 etcd-ha-439113                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         23m
	  kube-system                 kindnet-q4kpj                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      23m
	  kube-system                 kube-apiserver-ha-439113             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-439113    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-k7bcn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-439113             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-439113                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m4s                   kube-proxy       
	  Normal   Starting                 7m57s                  kube-proxy       
	  Normal   Starting                 23m                    kube-proxy       
	  Normal   NodeHasSufficientPID     23m                    kubelet          Node ha-439113 status is now: NodeHasSufficientPID
	  Normal   Starting                 23m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 23m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  23m                    kubelet          Node ha-439113 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23m                    kubelet          Node ha-439113 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           23m                    node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           22m                    node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   NodeReady                22m                    kubelet          Node ha-439113 status is now: NodeReady
	  Normal   RegisteredNode           21m                    node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   NodeHasNoDiskPressure    8m26s (x8 over 8m26s)  kubelet          Node ha-439113 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 8m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m26s (x8 over 8m26s)  kubelet          Node ha-439113 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     8m26s (x8 over 8m26s)  kubelet          Node ha-439113 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m                     node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           7m46s                  node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           7m9s                   node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet          Node ha-439113 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m58s (x8 over 5m58s)  kubelet          Node ha-439113 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m58s (x8 over 5m58s)  kubelet          Node ha-439113 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m6s                   node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           3m35s                  node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	
	
	Name:               ha-439113-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T10_53_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:53:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:16:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:15:51 +0000   Sat, 15 Nov 2025 11:08:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:15:51 +0000   Sat, 15 Nov 2025 11:08:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:15:51 +0000   Sat, 15 Nov 2025 11:08:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:15:51 +0000   Sat, 15 Nov 2025 11:12:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-439113-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d3455c64-e9a7-4ebe-b716-3cc9dc8ab51a
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6x277                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 etcd-ha-439113-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         22m
	  kube-system                 kindnet-mcj42                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-ha-439113-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-439113-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-kgftx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-439113-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-439113-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 22m                    kube-proxy       
	  Normal   Starting                 3m30s                  kube-proxy       
	  Normal   Starting                 7m35s                  kube-proxy       
	  Normal   RegisteredNode           22m                    node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           22m                    node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           21m                    node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   NodeNotReady             17m                    node-controller  Node ha-439113-m02 status is now: NodeNotReady
	  Normal   Starting                 8m22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     8m22s (x8 over 8m22s)  kubelet          Node ha-439113-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    8m22s (x8 over 8m22s)  kubelet          Node ha-439113-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  8m22s (x8 over 8m22s)  kubelet          Node ha-439113-m02 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 8m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           8m                     node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           7m46s                  node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           7m9s                   node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   Starting                 5m54s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m54s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m54s (x8 over 5m54s)  kubelet          Node ha-439113-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m54s (x8 over 5m54s)  kubelet          Node ha-439113-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m54s (x8 over 5m54s)  kubelet          Node ha-439113-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        4m54s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m6s                   node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           3m35s                  node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	
	
	Name:               ha-439113-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T10_56_52_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:56:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:16:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:16:05 +0000   Sat, 15 Nov 2025 11:08:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:16:05 +0000   Sat, 15 Nov 2025 11:08:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:16:05 +0000   Sat, 15 Nov 2025 11:08:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:16:05 +0000   Sat, 15 Nov 2025 11:08:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-439113-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                bf4456d3-e8dc-4a97-8e4f-cb829c9a4b90
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-trswm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-4k2k2               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-proxy-2fgtm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m56s                  kube-proxy       
	  Normal   Starting                 3m34s                  kube-proxy       
	  Normal   Starting                 19m                    kube-proxy       
	  Normal   Starting                 19m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 19m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           19m                    node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   NodeHasSufficientPID     19m (x3 over 19m)      kubelet          Node ha-439113-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  19m (x3 over 19m)      kubelet          Node ha-439113-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m (x3 over 19m)      kubelet          Node ha-439113-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           19m                    node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   RegisteredNode           19m                    node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   NodeReady                18m                    kubelet          Node ha-439113-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m                     node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   RegisteredNode           7m46s                  node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   Starting                 7m20s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m20s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m17s (x8 over 7m20s)  kubelet          Node ha-439113-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m17s (x8 over 7m20s)  kubelet          Node ha-439113-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m17s (x8 over 7m20s)  kubelet          Node ha-439113-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             7m10s                  node-controller  Node ha-439113-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           7m9s                   node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Warning  CgroupV1                 4m8s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 4m8s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m6s                   node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   NodeHasSufficientMemory  4m5s (x8 over 4m8s)    kubelet          Node ha-439113-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m5s (x8 over 4m8s)    kubelet          Node ha-439113-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m5s (x8 over 4m8s)    kubelet          Node ha-439113-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m35s                  node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	
	
	==> dmesg <==
	[Nov15 09:26] systemd-journald[225]: Failed to send WATCHDOG=1 notification message: Connection refused
	[Nov15 09:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[  +0.057232] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov15 10:38] overlayfs: idmapped layers are currently not supported
	[Nov15 10:39] overlayfs: idmapped layers are currently not supported
	[Nov15 10:52] overlayfs: idmapped layers are currently not supported
	[Nov15 10:53] overlayfs: idmapped layers are currently not supported
	[Nov15 10:54] overlayfs: idmapped layers are currently not supported
	[Nov15 10:56] overlayfs: idmapped layers are currently not supported
	[Nov15 10:58] overlayfs: idmapped layers are currently not supported
	[Nov15 11:07] overlayfs: idmapped layers are currently not supported
	[  +3.621339] overlayfs: idmapped layers are currently not supported
	[Nov15 11:08] overlayfs: idmapped layers are currently not supported
	[Nov15 11:09] overlayfs: idmapped layers are currently not supported
	[Nov15 11:10] overlayfs: idmapped layers are currently not supported
	[  +3.526164] overlayfs: idmapped layers are currently not supported
	[Nov15 11:12] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ab0d0c34b46d585c39a39112a9d96382b3c2d54b036b01e5aabb4c9adb26fe48] <==
	{"level":"warn","ts":"2025-11-15T11:11:53.896124Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.790585Z","time spent":"7.105534206s","remote":"127.0.0.1:33982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:500 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896135Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.790568Z","time spent":"7.105562742s","remote":"127.0.0.1:33412","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896145Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.786656Z","time spent":"7.109486177s","remote":"127.0.0.1:33262","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896155Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.784378Z","time spent":"7.111774446s","remote":"127.0.0.1:33190","response type":"/etcdserverpb.KV/Range","request count":0,"request size":21,"response count":0,"response size":0,"request content":"key:\"/registry/secrets\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896356Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803214Z","time spent":"7.093138803s","remote":"127.0.0.1:33206","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":0,"request content":"key:\"/registry/configmaps/kube-system/\" range_end:\"/registry/configmaps/kube-system0\" limit:500 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896367Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803180Z","time spent":"7.093183504s","remote":"127.0.0.1:33644","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896378Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803030Z","time spent":"7.093345383s","remote":"127.0.0.1:33792","response type":"/etcdserverpb.KV/Range","request count":0,"request size":31,"response count":0,"response size":0,"request content":"key:\"/registry/volumeattachments\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896390Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803134Z","time spent":"7.09325027s","remote":"127.0.0.1:33532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":0,"response size":0,"request content":"key:\"/registry/networkpolicies\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896420Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.784174Z","time spent":"7.11223919s","remote":"127.0.0.1:33404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" limit:10000 "}
	{"level":"info","ts":"2025-11-15T11:11:53.896435Z","caller":"traceutil/trace.go:172","msg":"trace[1793027741] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; }","duration":"7.098153671s","start":"2025-11-15T11:11:46.798274Z","end":"2025-11-15T11:11:53.896428Z","steps":["trace[1793027741] 'agreement among raft nodes before linearized reading'  (duration: 7.076120166s)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T11:11:53.896596Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803160Z","time spent":"7.09342672s","remote":"127.0.0.1:33862","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" limit:10000 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896624Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.797673Z","time spent":"7.098946471s","remote":"127.0.0.1:33510","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" limit:500 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896636Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.797656Z","time spent":"7.098975567s","remote":"127.0.0.1:33584","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":0,"request content":"key:\"/registry/ipaddresses\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896661Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.797642Z","time spent":"7.099002897s","remote":"127.0.0.1:33340","response type":"/etcdserverpb.KV/Range","request count":0,"request size":31,"response count":0,"response size":0,"request content":"key:\"/registry/minions/ha-439113\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896674Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.804064Z","time spent":"7.092606129s","remote":"127.0.0.1:33412","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.897422Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803114Z","time spent":"7.094295237s","remote":"127.0.0.1:33808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":0,"request content":"key:\"/registry/volumeattributesclasses\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.897453Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803097Z","time spent":"7.094349005s","remote":"127.0.0.1:33232","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" limit:10000 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.897466Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803071Z","time spent":"7.094390744s","remote":"127.0.0.1:33870","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" limit:10000 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.897881Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803052Z","time spent":"7.094816547s","remote":"127.0.0.1:33820","response type":"/etcdserverpb.KV/Range","request count":0,"request size":32,"response count":0,"response size":0,"request content":"key:\"/registry/csinodes/ha-439113\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.897906Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.802979Z","time spent":"7.094921983s","remote":"127.0.0.1:34108","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":0,"request content":"key:\"/registry/resourceclaimtemplates/\" range_end:\"/registry/resourceclaimtemplates0\" limit:10000 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.897918Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803012Z","time spent":"7.094902036s","remote":"127.0.0.1:33754","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.895670Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.793133Z","time spent":"7.102533588s","remote":"127.0.0.1:34062","response type":"/etcdserverpb.KV/Range","request count":0,"request size":27,"response count":0,"response size":0,"request content":"key:\"/registry/deviceclasses\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.953288Z","caller":"etcdserver/v3_server.go:888","msg":"ignored out-of-date read index response; local node read indexes queueing up and waiting to be in sync with leader","sent-request-id":8128041333002731821,"received-request-id":8128041333002731820}
	{"level":"info","ts":"2025-11-15T11:11:54.143241Z","caller":"traceutil/trace.go:172","msg":"trace[808255463] linearizableReadLoop","detail":"{readStateIndex:4470; appliedIndex:4470; }","duration":"172.978406ms","start":"2025-11-15T11:11:53.970246Z","end":"2025-11-15T11:11:54.143224Z","steps":["trace[808255463] 'read index received'  (duration: 172.965146ms)","trace[808255463] 'applied index is now lower than readState.Index'  (duration: 12.513µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T11:11:54.143416Z","caller":"traceutil/trace.go:172","msg":"trace[686608144] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:3721; }","duration":"174.658102ms","start":"2025-11-15T11:11:53.968751Z","end":"2025-11-15T11:11:54.143410Z","steps":["trace[686608144] 'agreement among raft nodes before linearized reading'  (duration: 174.609536ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:16:06 up  2:58,  0 user,  load average: 0.46, 1.09, 1.35
	Linux ha-439113 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [624e9c4484de9254bf51adb5f68cf3ee64fa67c57ec0731d0bf92706a6167a9c] <==
	I1115 11:15:18.430971       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:15:28.421571       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:15:28.421606       1 main.go:301] handling current node
	I1115 11:15:28.421621       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:15:28.421632       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:15:28.421853       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:15:28.421861       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:15:38.424929       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:15:38.424981       1 main.go:301] handling current node
	I1115 11:15:38.424998       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:15:38.425004       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:15:38.425206       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:15:38.425223       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:15:48.426519       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:15:48.426554       1 main.go:301] handling current node
	I1115 11:15:48.426571       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:15:48.426577       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:15:48.428224       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:15:48.428258       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:15:58.425018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:15:58.425129       1 main.go:301] handling current node
	I1115 11:15:58.425169       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:15:58.425207       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:15:58.425384       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:15:58.425421       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c9aa769ac1e410d0690ad31ea1ef812bb7de4c70e937d471392caf66737a2862] <==
	{"level":"warn","ts":"2025-11-15T11:11:11.780145Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001588b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780169Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001d63860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780193Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001969680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780223Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40027a8d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780249Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40027a8d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780277Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40022752c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780304Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40023e4780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780333Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026c3860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780359Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026c3860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780383Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019be3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780406Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002bd4780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780427Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002bd4780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780448Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014fe960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780469Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001798960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780496Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001798960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780520Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015881e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780543Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019bed20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780567Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40025c4d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780589Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021ce5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780615Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021ce5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780637Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014fef00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780660Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014fef00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780685Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400201a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	F1115 11:11:17.182112       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	{"level":"warn","ts":"2025-11-15T11:11:17.353763Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400250af00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	
	
	==> kube-apiserver [d24d48c3f9b01e8a715249be7330e6cfad6f59261b7723b5de70efa554928964] <==
	I1115 11:11:54.167816       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 11:11:54.174315       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:11:54.174482       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 11:11:54.197129       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 11:11:54.198171       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 11:11:54.225142       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 11:11:54.260659       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 11:11:54.275062       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 11:11:54.276988       1 cache.go:39] Caches are synced for autoregister controller
	I1115 11:11:54.298453       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:11:54.354535       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1115 11:11:54.363129       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1115 11:11:54.364714       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 11:11:54.378229       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 11:11:54.378262       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 11:11:54.378385       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 11:11:54.401493       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1115 11:11:54.415287       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1115 11:11:54.477155       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 11:11:54.477232       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 11:11:55.801917       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1115 11:11:56.437942       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1115 11:12:01.275927       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 11:12:31.830683       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 11:12:37.901647       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [45eb4921c003b25c5119ab01196399bab3eb8157fb07652ba3dcd97194afeb00] <==
	I1115 11:12:31.388664       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 11:12:31.392138       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 11:12:31.392251       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-439113-m04"
	I1115 11:12:31.393934       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 11:12:31.394231       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:12:31.405980       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 11:12:31.406062       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 11:12:31.406150       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 11:12:31.406185       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 11:12:31.407984       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 11:12:31.413011       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 11:12:31.413156       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 11:12:31.418877       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 11:12:31.426163       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 11:12:31.428921       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 11:12:31.429031       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 11:12:31.429079       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 11:12:31.434013       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 11:12:31.441466       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 11:12:31.446524       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 11:12:31.449650       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 11:12:31.481074       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:12:31.481105       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:12:31.481113       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:12:31.519964       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [be718981167470587e7edcab954bb28586e88b90bde200f9d703d4bf87527c41] <==
	I1115 11:11:30.275411       1 serving.go:386] Generated self-signed cert in-memory
	I1115 11:11:31.365181       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1115 11:11:31.365208       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:11:31.368367       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1115 11:11:31.370810       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 11:11:31.370917       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1115 11:11:31.371086       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1115 11:11:41.387475       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [16ebc70b03ad38e3a7e5abff3cead02f628f4a722d181136401c1a8c416ae823] <==
	I1115 11:12:01.396280       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:12:01.491396       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:12:01.592661       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:12:01.592701       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 11:12:01.592780       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:12:01.742121       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:12:01.742188       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:12:01.763218       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:12:01.764138       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:12:01.764797       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:12:01.789051       1 config.go:200] "Starting service config controller"
	I1115 11:12:01.789146       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:12:01.789599       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:12:01.789660       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:12:01.789732       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:12:01.789761       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:12:01.794216       1 config.go:309] "Starting node config controller"
	I1115 11:12:01.794306       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:12:01.794337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:12:01.890300       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:12:01.890346       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 11:12:01.890389       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e0b918dd4970fd4deab2473f719156caad36c70e91836ec9407fd62c0e66c2f1] <==
	E1115 11:10:59.658870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 11:11:00.345811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 11:11:00.432409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 11:11:02.384472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 11:11:02.426048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:11:21.983897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 11:11:23.265022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 11:11:26.946829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 11:11:27.361077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 11:11:28.929218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 11:11:29.282135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:11:29.741098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 11:11:31.948528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 11:11:32.427201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 11:11:32.729768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 11:11:33.157818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 11:11:34.701567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 11:11:35.752287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 11:11:36.951331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:11:37.615660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 11:11:38.448988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 11:11:40.797158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 11:11:41.756113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 11:11:44.289532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1115 11:12:21.588534       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844253     802 projected.go:196] Error preparing data for projected volume kube-api-access-sd5j8 for pod kube-system/storage-provisioner: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844286     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a63ca66-7de2-40d8-96f0-a99da4ba3411-kube-api-access-sd5j8 podName:6a63ca66-7de2-40d8-96f0-a99da4ba3411 nodeName:}" failed. No retries permitted until 2025-11-15 11:11:57.844277125 +0000 UTC m=+109.205504722 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sd5j8" (UniqueName: "kubernetes.io/projected/6a63ca66-7de2-40d8-96f0-a99da4ba3411-kube-api-access-sd5j8") pod "storage-provisioner" (UID: "6a63ca66-7de2-40d8-96f0-a99da4ba3411") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844314     802 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844326     802 projected.go:196] Error preparing data for projected volume kube-api-access-5ghqb for pod default/busybox-7b57f96db7-vddcm: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844354     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/92adc10b-e910-45d1-8267-ee2e884d0dcc-kube-api-access-5ghqb podName:92adc10b-e910-45d1-8267-ee2e884d0dcc nodeName:}" failed. No retries permitted until 2025-11-15 11:11:57.844345777 +0000 UTC m=+109.205573365 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5ghqb" (UniqueName: "kubernetes.io/projected/92adc10b-e910-45d1-8267-ee2e884d0dcc-kube-api-access-5ghqb") pod "busybox-7b57f96db7-vddcm" (UID: "92adc10b-e910-45d1-8267-ee2e884d0dcc") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844373     802 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844479     802 projected.go:196] Error preparing data for projected volume kube-api-access-b6xlh for pod kube-system/coredns-66bc5c9577-4g6sm: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844521     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9460f377-28d8-418c-9dab-9428dfbfca1d-kube-api-access-b6xlh podName:9460f377-28d8-418c-9dab-9428dfbfca1d nodeName:}" failed. No retries permitted until 2025-11-15 11:11:57.844511856 +0000 UTC m=+109.205739445 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-b6xlh" (UniqueName: "kubernetes.io/projected/9460f377-28d8-418c-9dab-9428dfbfca1d-kube-api-access-b6xlh") pod "coredns-66bc5c9577-4g6sm" (UID: "9460f377-28d8-418c-9dab-9428dfbfca1d") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:57 ha-439113 kubelet[802]: I1115 11:11:57.908131     802 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 11:11:58 ha-439113 kubelet[802]: W1115 11:11:58.358260     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-8504950f9102e2d3678db003685a9003674d358c2d886fa984b1f644a575da04 WatchSource:0}: Error finding container 8504950f9102e2d3678db003685a9003674d358c2d886fa984b1f644a575da04: Status 404 returned error can't find the container with id 8504950f9102e2d3678db003685a9003674d358c2d886fa984b1f644a575da04
	Nov 15 11:11:58 ha-439113 kubelet[802]: W1115 11:11:58.418603     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-d43213c9afa20eab4c28068b149534132632427cb558bccbf02b8458b2dd0280 WatchSource:0}: Error finding container d43213c9afa20eab4c28068b149534132632427cb558bccbf02b8458b2dd0280: Status 404 returned error can't find the container with id d43213c9afa20eab4c28068b149534132632427cb558bccbf02b8458b2dd0280
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.705715     802 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.705866     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4718f104-1eea-4e92-b339-dc6ae067eee3-kube-proxy podName:4718f104-1eea-4e92-b339-dc6ae067eee3 nodeName:}" failed. No retries permitted until 2025-11-15 11:12:00.70583574 +0000 UTC m=+112.067063329 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/4718f104-1eea-4e92-b339-dc6ae067eee3-kube-proxy") pod "kube-proxy-k7bcn" (UID: "4718f104-1eea-4e92-b339-dc6ae067eee3") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.911022     802 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.911067     802 projected.go:196] Error preparing data for projected volume kube-api-access-5ghqb for pod default/busybox-7b57f96db7-vddcm: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.911165     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/92adc10b-e910-45d1-8267-ee2e884d0dcc-kube-api-access-5ghqb podName:92adc10b-e910-45d1-8267-ee2e884d0dcc nodeName:}" failed. No retries permitted until 2025-11-15 11:12:00.91114076 +0000 UTC m=+112.272368357 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5ghqb" (UniqueName: "kubernetes.io/projected/92adc10b-e910-45d1-8267-ee2e884d0dcc-kube-api-access-5ghqb") pod "busybox-7b57f96db7-vddcm" (UID: "92adc10b-e910-45d1-8267-ee2e884d0dcc") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:12:00 ha-439113 kubelet[802]: I1115 11:12:00.852948     802 scope.go:117] "RemoveContainer" containerID="be718981167470587e7edcab954bb28586e88b90bde200f9d703d4bf87527c41"
	Nov 15 11:12:00 ha-439113 kubelet[802]: E1115 11:12:00.853132     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-439113_kube-system(61daecae9db4def537bd68f54312f1ae)\"" pod="kube-system/kube-controller-manager-ha-439113" podUID="61daecae9db4def537bd68f54312f1ae"
	Nov 15 11:12:01 ha-439113 kubelet[802]: W1115 11:12:01.080611     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-b9f35a414830a814a3c7874120d74394bc21adeb5906a90adb474cbab5a11397 WatchSource:0}: Error finding container b9f35a414830a814a3c7874120d74394bc21adeb5906a90adb474cbab5a11397: Status 404 returned error can't find the container with id b9f35a414830a814a3c7874120d74394bc21adeb5906a90adb474cbab5a11397
	Nov 15 11:12:08 ha-439113 kubelet[802]: E1115 11:12:08.835937     802 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/54bc03e5aa3c6fcbbe6935a8420792c10e6b1241a59bf0fdde396399ed9639de/diff" to get inode usage: stat /var/lib/containers/storage/overlay/54bc03e5aa3c6fcbbe6935a8420792c10e6b1241a59bf0fdde396399ed9639de/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-439113_61daecae9db4def537bd68f54312f1ae/kube-controller-manager/3.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-439113_61daecae9db4def537bd68f54312f1ae/kube-controller-manager/3.log: no such file or directory
	Nov 15 11:12:08 ha-439113 kubelet[802]: E1115 11:12:08.849660     802 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/eb045b83b5da536e46c3745bb2a8803b5c05df65a3052a5d8a939a5b61aff0de/diff" to get inode usage: stat /var/lib/containers/storage/overlay/eb045b83b5da536e46c3745bb2a8803b5c05df65a3052a5d8a939a5b61aff0de/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-439113_61daecae9db4def537bd68f54312f1ae/kube-controller-manager/4.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-439113_61daecae9db4def537bd68f54312f1ae/kube-controller-manager/4.log: no such file or directory
	Nov 15 11:12:12 ha-439113 kubelet[802]: I1115 11:12:12.853172     802 scope.go:117] "RemoveContainer" containerID="be718981167470587e7edcab954bb28586e88b90bde200f9d703d4bf87527c41"
	Nov 15 11:12:12 ha-439113 kubelet[802]: E1115 11:12:12.853836     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-439113_kube-system(61daecae9db4def537bd68f54312f1ae)\"" pod="kube-system/kube-controller-manager-ha-439113" podUID="61daecae9db4def537bd68f54312f1ae"
	Nov 15 11:12:27 ha-439113 kubelet[802]: I1115 11:12:27.852165     802 scope.go:117] "RemoveContainer" containerID="be718981167470587e7edcab954bb28586e88b90bde200f9d703d4bf87527c41"
	Nov 15 11:12:28 ha-439113 kubelet[802]: I1115 11:12:28.302685     802 scope.go:117] "RemoveContainer" containerID="d86466a64c1754474a329490ff47ef2c868ab7ca5cee646b6d77e75e89205609"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-439113 -n ha-439113
helpers_test.go:269: (dbg) Run:  kubectl --context ha-439113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (366.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-439113" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-439113\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-439113\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-439113\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-439113
helpers_test.go:243: (dbg) docker inspect ha-439113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc",
	        "Created": "2025-11-15T10:52:17.169946413Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:10:01.380531105Z",
	            "FinishedAt": "2025-11-15T11:10:00.266325121Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/hosts",
	        "LogPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc-json.log",
	        "Name": "/ha-439113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-439113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-439113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc",
	                "LowerDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-439113",
	                "Source": "/var/lib/docker/volumes/ha-439113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-439113",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-439113",
	                "name.minikube.sigs.k8s.io": "ha-439113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1552653af76d6dd7c6162ea9f89df1884eadd013a674c8ab945e116cac5292c2",
	            "SandboxKey": "/var/run/docker/netns/1552653af76d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33569"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33570"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33573"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33571"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33572"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-439113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:f1:61:d7:6f:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70b4341e58399e11a79033573f4328a7d843f08aeced339b6115cf0c5d327007",
	                    "EndpointID": "ecb9ec3e068adfb90b6cea007bf9d7996cf48ef1255455853c88ec25ad196b03",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-439113",
	                        "d546a4fc19d8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
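The port mappings captured above are what the restart path reads back when it dials SSH: 22/tcp inside the container is published on 127.0.0.1:33569, the address libmachine uses in the provisioning log further down. The same lookup can be reproduced with docker's Go template (a sketch, assuming the ha-439113 container is still running on this host):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-439113
	# prints 33569 for the container state shown above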
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-439113 -n ha-439113
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 logs -n 25: (1.510325239s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-439113 cp ha-439113-m03:/home/docker/cp-test.txt ha-439113-m04:/home/docker/cp-test_ha-439113-m03_ha-439113-m04.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test_ha-439113-m03_ha-439113-m04.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp testdata/cp-test.txt ha-439113-m04:/home/docker/cp-test.txt                                                             │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1077460994/001/cp-test_ha-439113-m04.txt │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113:/home/docker/cp-test_ha-439113-m04_ha-439113.txt                       │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113.txt                                                 │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113-m02:/home/docker/cp-test_ha-439113-m04_ha-439113-m02.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m02 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113-m02.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113-m03:/home/docker/cp-test_ha-439113-m04_ha-439113-m03.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113-m03.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ node    │ ha-439113 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:58 UTC │
	│ node    │ ha-439113 node start m02 --alsologtostderr -v 5                                                                                      │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:58 UTC │                     │
	│ node    │ ha-439113 node list --alsologtostderr -v 5                                                                                           │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:06 UTC │                     │
	│ stop    │ ha-439113 stop --alsologtostderr -v 5                                                                                                │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:06 UTC │ 15 Nov 25 11:07 UTC │
	│ start   │ ha-439113 start --wait true --alsologtostderr -v 5                                                                                   │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:07 UTC │ 15 Nov 25 11:09 UTC │
	│ node    │ ha-439113 node list --alsologtostderr -v 5                                                                                           │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:09 UTC │                     │
	│ node    │ ha-439113 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:09 UTC │ 15 Nov 25 11:09 UTC │
	│ stop    │ ha-439113 stop --alsologtostderr -v 5                                                                                                │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:09 UTC │ 15 Nov 25 11:10 UTC │
	│ start   │ ha-439113 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:10 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:10:01
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:10:01.082148  644414 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:10:01.082358  644414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:10:01.082389  644414 out.go:374] Setting ErrFile to fd 2...
	I1115 11:10:01.082410  644414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:10:01.082810  644414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:10:01.083841  644414 out.go:368] Setting JSON to false
	I1115 11:10:01.084783  644414 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10352,"bootTime":1763194649,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:10:01.084926  644414 start.go:143] virtualization:  
	I1115 11:10:01.088178  644414 out.go:179] * [ha-439113] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:10:01.092058  644414 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:10:01.092190  644414 notify.go:221] Checking for updates...
	I1115 11:10:01.098137  644414 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:10:01.101114  644414 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:10:01.104087  644414 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:10:01.107082  644414 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:10:01.110104  644414 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:10:01.113527  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:01.114129  644414 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:10:01.149515  644414 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:10:01.149650  644414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:10:01.214815  644414 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-15 11:10:01.203630276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:10:01.214940  644414 docker.go:319] overlay module found
	I1115 11:10:01.218203  644414 out.go:179] * Using the docker driver based on existing profile
	I1115 11:10:01.222067  644414 start.go:309] selected driver: docker
	I1115 11:10:01.222095  644414 start.go:930] validating driver "docker" against &{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:10:01.222249  644414 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:10:01.222374  644414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:10:01.290199  644414 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-15 11:10:01.272152631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:10:01.290633  644414 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:10:01.290666  644414 cni.go:84] Creating CNI manager for ""
	I1115 11:10:01.290735  644414 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1115 11:10:01.290785  644414 start.go:353] cluster config:
	{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:10:01.295923  644414 out.go:179] * Starting "ha-439113" primary control-plane node in "ha-439113" cluster
	I1115 11:10:01.298854  644414 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:10:01.301829  644414 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:10:01.304672  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:10:01.304725  644414 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 11:10:01.304736  644414 cache.go:65] Caching tarball of preloaded images
	I1115 11:10:01.304766  644414 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:10:01.304826  644414 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:10:01.304837  644414 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:10:01.305022  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:01.325510  644414 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:10:01.325535  644414 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:10:01.325557  644414 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:10:01.325582  644414 start.go:360] acquireMachinesLock for ha-439113: {Name:mk8f5fddf42cbee62c5cd775824daee5e174c730 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:10:01.325648  644414 start.go:364] duration metric: took 38.851µs to acquireMachinesLock for "ha-439113"
	I1115 11:10:01.325671  644414 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:10:01.325676  644414 fix.go:54] fixHost starting: 
	I1115 11:10:01.325927  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:10:01.343552  644414 fix.go:112] recreateIfNeeded on ha-439113: state=Stopped err=<nil>
	W1115 11:10:01.343585  644414 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:10:01.346902  644414 out.go:252] * Restarting existing docker container for "ha-439113" ...
	I1115 11:10:01.347040  644414 cli_runner.go:164] Run: docker start ha-439113
	I1115 11:10:01.611121  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:10:01.630743  644414 kic.go:430] container "ha-439113" state is running.
	I1115 11:10:01.631322  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:10:01.657614  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:01.657847  644414 machine.go:94] provisionDockerMachine start ...
	I1115 11:10:01.657906  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:01.682277  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:01.682596  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:01.682604  644414 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:10:01.683536  644414 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:10:04.832447  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113
	
	I1115 11:10:04.832472  644414 ubuntu.go:182] provisioning hostname "ha-439113"
	I1115 11:10:04.832543  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:04.850661  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:04.850981  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:04.850997  644414 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113 && echo "ha-439113" | sudo tee /etc/hostname
	I1115 11:10:05.019162  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113
	
	I1115 11:10:05.019373  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:05.040944  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:05.041275  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:05.041312  644414 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:10:05.193601  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:10:05.193631  644414 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:10:05.193651  644414 ubuntu.go:190] setting up certificates
	I1115 11:10:05.193661  644414 provision.go:84] configureAuth start
	I1115 11:10:05.193734  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:10:05.211992  644414 provision.go:143] copyHostCerts
	I1115 11:10:05.212041  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:05.212076  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:10:05.212095  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:05.212172  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:10:05.212264  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:05.212287  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:10:05.212292  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:05.212324  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:10:05.212370  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:05.212391  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:10:05.212398  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:05.212423  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:10:05.212513  644414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113 san=[127.0.0.1 192.168.49.2 ha-439113 localhost minikube]
	I1115 11:10:06.070863  644414 provision.go:177] copyRemoteCerts
	I1115 11:10:06.070938  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:10:06.071014  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.090345  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.196902  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 11:10:06.196968  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:10:06.216309  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 11:10:06.216383  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1115 11:10:06.234832  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 11:10:06.234898  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:10:06.252396  644414 provision.go:87] duration metric: took 1.058711326s to configureAuth
	I1115 11:10:06.252465  644414 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:10:06.252742  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:06.252850  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.270036  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:06.270362  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:06.270383  644414 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:10:06.614480  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:10:06.614501  644414 machine.go:97] duration metric: took 4.956644455s to provisionDockerMachine
	I1115 11:10:06.614512  644414 start.go:293] postStartSetup for "ha-439113" (driver="docker")
	I1115 11:10:06.614523  644414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:10:06.614593  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:10:06.614633  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.635190  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.741143  644414 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:10:06.744492  644414 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:10:06.744522  644414 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:10:06.744534  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:10:06.744591  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:10:06.744682  644414 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:10:06.744693  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 11:10:06.744792  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:10:06.752206  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:10:06.769623  644414 start.go:296] duration metric: took 155.096546ms for postStartSetup
	I1115 11:10:06.769735  644414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:10:06.769797  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.786747  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.889967  644414 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:10:06.894381  644414 fix.go:56] duration metric: took 5.56869817s for fixHost
	I1115 11:10:06.894404  644414 start.go:83] releasing machines lock for "ha-439113", held for 5.568743749s
	I1115 11:10:06.894468  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:10:06.912478  644414 ssh_runner.go:195] Run: cat /version.json
	I1115 11:10:06.912503  644414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:10:06.912549  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.912557  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.935963  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.943189  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:07.140607  644414 ssh_runner.go:195] Run: systemctl --version
	I1115 11:10:07.147286  644414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:10:07.181632  644414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:10:07.186178  644414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:10:07.186315  644414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:10:07.194727  644414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:10:07.194754  644414 start.go:496] detecting cgroup driver to use...
	I1115 11:10:07.194787  644414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:10:07.194836  644414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:10:07.211038  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:10:07.228463  644414 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:10:07.228531  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:10:07.245230  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:10:07.259066  644414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:10:07.400677  644414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:10:07.528374  644414 docker.go:234] disabling docker service ...
	I1115 11:10:07.528452  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:10:07.544386  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:10:07.557994  644414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:10:07.673355  644414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:10:07.789554  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:10:07.802473  644414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:10:07.816520  644414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:10:07.816638  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.825590  644414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:10:07.825753  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.834624  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.843465  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.852151  644414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:10:07.860174  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.869179  644414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.877916  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.886986  644414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:10:07.894890  644414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:10:07.902588  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:10:08.022572  644414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:10:08.143861  644414 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:10:08.144001  644414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:10:08.148082  644414 start.go:564] Will wait 60s for crictl version
	I1115 11:10:08.148187  644414 ssh_runner.go:195] Run: which crictl
	I1115 11:10:08.151776  644414 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:10:08.176109  644414 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:10:08.176190  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:10:08.206377  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:10:08.246152  644414 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:10:08.249013  644414 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:10:08.265246  644414 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 11:10:08.269229  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:10:08.279381  644414 kubeadm.go:884] updating cluster {Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:10:08.279538  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:10:08.279594  644414 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:10:08.313662  644414 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:10:08.313686  644414 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:10:08.313742  644414 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:10:08.341156  644414 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:10:08.341180  644414 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:10:08.341189  644414 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 11:10:08.341297  644414 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:10:08.341383  644414 ssh_runner.go:195] Run: crio config
	I1115 11:10:08.417323  644414 cni.go:84] Creating CNI manager for ""
	I1115 11:10:08.417346  644414 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1115 11:10:08.417367  644414 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:10:08.417391  644414 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-439113 NodeName:ha-439113 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:10:08.417529  644414 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-439113"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
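
The config above is a four-document kubeadm file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new further down. A hedged sketch of rendering just the InitConfiguration fragment with text/template, using the values the log shows; the template and field names are illustrative, not minikube's:

package main

import (
	"os"
	"text/template"
)

// Illustrative template for the first document of the kubeadm config above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	tmpl := template.Must(template.New("init").Parse(initCfg))
	err := tmpl.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.49.2",
		"BindPort":         8443,
		"CRISocket":        "unix:///var/run/crio/crio.sock",
		"NodeName":         "ha-439113",
	})
	if err != nil {
		panic(err)
	}
}
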
	
	I1115 11:10:08.417554  644414 kube-vip.go:115] generating kube-vip config ...
	I1115 11:10:08.417612  644414 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 11:10:08.429604  644414 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
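
Control-plane load balancing is skipped here simply because `lsmod | grep ip_vs` found no IPVS modules loaded in the node's kernel. A small sketch of the same check done by reading /proc/modules directly (Linux only); loading the modules with modprobe on a host that ships them would flip the result:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Same check as `lsmod | grep ip_vs`, reading /proc/modules directly (Linux only).
	f, err := os.Open("/proc/modules")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	found := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			found = true
			break
		}
	}
	fmt.Println("ip_vs loaded:", found)
}
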
	I1115 11:10:08.429765  644414 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
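
This static pod is what advertises the HA VIP 192.168.49.254 on port 8443 once kubelet starts it. A minimal sketch of probing that endpoint from the host, assuming the VIP is up (plain TCP connect, no TLS handshake):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Plain TCP probe of the control-plane VIP advertised by kube-vip.
	conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 2*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable")
}
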
	I1115 11:10:08.429836  644414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:10:08.437846  644414 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:10:08.437927  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1115 11:10:08.445900  644414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1115 11:10:08.459668  644414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:10:08.472428  644414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1115 11:10:08.485415  644414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 11:10:08.498516  644414 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 11:10:08.502240  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
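
The bash one-liner above drops any stale control-plane.minikube.internal mapping from /etc/hosts and appends the VIP entry. The same idea in Go, written against a throwaway hostsPath so it can be exercised on a copy of the file rather than the real /etc/hosts:

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/tmp/hosts.copy" // try on a copy; the real target is /etc/hosts
	const entry = "192.168.49.254\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any stale mapping for the same hostname, mirroring the grep -v above.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")), 0644); err != nil {
		panic(err)
	}
}
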
	I1115 11:10:08.512200  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:10:08.622281  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:10:08.654146  644414 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.2
	I1115 11:10:08.654177  644414 certs.go:195] generating shared ca certs ...
	I1115 11:10:08.654195  644414 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:08.654338  644414 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:10:08.654393  644414 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:10:08.654406  644414 certs.go:257] generating profile certs ...
	I1115 11:10:08.654496  644414 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 11:10:08.654531  644414 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423
	I1115 11:10:08.654549  644414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1115 11:10:09.275584  644414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423 ...
	I1115 11:10:09.275661  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423: {Name:mkcc7bf2bc49672369082197c2ea205c3b413e73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:09.275872  644414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423 ...
	I1115 11:10:09.275912  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423: {Name:mkddc44bc05ba35828280547efe210b00108cabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:09.276063  644414 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt
	I1115 11:10:09.276243  644414 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key
	I1115 11:10:09.276437  644414 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
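
The apiserver profile cert generated above is signed by minikubeCA and carries the service IP, loopback, both control-plane node IPs and the VIP as SANs. For illustration only, a self-signed variant carrying a similar SAN IP list, built from the standard library (minikube's real cert is CA-signed, so this is a sketch, not its implementation):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN IPs mirroring the list in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"),
			net.ParseIP("192.168.49.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
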
	I1115 11:10:09.276473  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 11:10:09.276509  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 11:10:09.276554  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 11:10:09.276590  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 11:10:09.276617  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 11:10:09.276659  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 11:10:09.276698  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 11:10:09.276726  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 11:10:09.276806  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:10:09.276885  644414 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:10:09.276915  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:10:09.276959  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:10:09.277013  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:10:09.277057  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:10:09.277153  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:10:09.277220  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.277264  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.277297  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.277887  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:10:09.296564  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:10:09.314781  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:10:09.335633  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:10:09.353146  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 11:10:09.370859  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 11:10:09.388232  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:10:09.410774  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:10:09.439944  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:10:09.477014  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:10:09.526226  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:10:09.559717  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:10:09.610930  644414 ssh_runner.go:195] Run: openssl version
	I1115 11:10:09.623460  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:10:09.643972  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.652807  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.653014  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.741237  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:10:09.749901  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:10:09.767184  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.774726  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.774846  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.838136  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:10:09.846476  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:10:09.890099  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.895038  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.895102  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.961757  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:10:09.976918  644414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:10:09.985687  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:10:10.033177  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:10:10.079291  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:10:10.125057  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:10:10.168941  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:10:10.219261  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
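
Each `openssl x509 ... -checkend 86400` run above asks whether the certificate expires within the next 24 hours; a failing check would trigger regeneration. The equivalent check in Go (the path is just the first one from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of `openssl x509 -checkend 86400`: does the cert expire within 24h?
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h:", cert.NotAfter)
	}
}
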
	I1115 11:10:10.289307  644414 kubeadm.go:401] StartCluster: {Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:10:10.289486  644414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:10:10.289574  644414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:10:10.354477  644414 cri.go:89] found id: "ab0d0c34b46d585c39a39112a9d96382b3c2d54b036b01e5aabb4c9adb26fe48"
	I1115 11:10:10.354514  644414 cri.go:89] found id: "f5462600e253c742d103a09b518cadafb5354c9b674147e2394344fc4f6cdd17"
	I1115 11:10:10.354519  644414 cri.go:89] found id: "c9aa769ac1e410d0690ad31ea1ef812bb7de4c70e937d471392caf66737a2862"
	I1115 11:10:10.354523  644414 cri.go:89] found id: "49f53dedd4e32694c1de85010bf005f40b10dfe1e581005787ce4f5229936764"
	I1115 11:10:10.354526  644414 cri.go:89] found id: "e0b918dd4970fd4deab2473f719156caad36c70e91836ec9407fd62c0e66c2f1"
	I1115 11:10:10.354530  644414 cri.go:89] found id: ""
	I1115 11:10:10.354587  644414 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 11:10:10.370661  644414 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:10:10Z" level=error msg="open /run/runc: no such file or directory"
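
The runc listing fails because /run/runc does not exist yet, and the failure is logged as a warning rather than aborting the restart. A small sketch of that tolerant pattern, distinguishing "ran but exited non-zero" from "could not be started at all":

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("runc", "list", "-f", "json").CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("paused containers: %s\n", out)
	case errors.As(err, &exitErr):
		// Command ran but failed (e.g. /run/runc missing); treat as non-fatal, like the log above.
		fmt.Printf("runc list failed (status %d), continuing: %s\n", exitErr.ExitCode(), out)
	default:
		fmt.Println("could not run runc:", err)
	}
}
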
	I1115 11:10:10.370748  644414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:10:10.382258  644414 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:10:10.382296  644414 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:10:10.382347  644414 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:10:10.390626  644414 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:10:10.391102  644414 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-439113" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:10:10.391230  644414 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-584713/kubeconfig needs updating (will repair): [kubeconfig missing "ha-439113" cluster setting kubeconfig missing "ha-439113" context setting]
	I1115 11:10:10.391547  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:10.392161  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 11:10:10.393236  644414 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1115 11:10:10.393317  644414 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 11:10:10.393332  644414 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 11:10:10.393338  644414 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 11:10:10.393347  644414 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 11:10:10.393352  644414 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 11:10:10.394951  644414 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:10:10.405841  644414 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1115 11:10:10.405873  644414 kubeadm.go:602] duration metric: took 23.570972ms to restartPrimaryControlPlane
	I1115 11:10:10.405883  644414 kubeadm.go:403] duration metric: took 116.586705ms to StartCluster
	I1115 11:10:10.405898  644414 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:10.405969  644414 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:10:10.406686  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:10.406905  644414 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:10:10.406942  644414 start.go:242] waiting for startup goroutines ...
	I1115 11:10:10.406961  644414 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:10:10.407533  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:10.412935  644414 out.go:179] * Enabled addons: 
	I1115 11:10:10.415804  644414 addons.go:515] duration metric: took 8.829529ms for enable addons: enabled=[]
	I1115 11:10:10.415842  644414 start.go:247] waiting for cluster config update ...
	I1115 11:10:10.415858  644414 start.go:256] writing updated cluster config ...
	I1115 11:10:10.419060  644414 out.go:203] 
	I1115 11:10:10.422348  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:10.422466  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:10.425867  644414 out.go:179] * Starting "ha-439113-m02" control-plane node in "ha-439113" cluster
	I1115 11:10:10.428658  644414 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:10:10.431470  644414 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:10:10.434231  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:10:10.434251  644414 cache.go:65] Caching tarball of preloaded images
	I1115 11:10:10.434373  644414 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:10:10.434390  644414 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:10:10.434509  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:10.434718  644414 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:10:10.459579  644414 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:10:10.459605  644414 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:10:10.459619  644414 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:10:10.459645  644414 start.go:360] acquireMachinesLock for ha-439113-m02: {Name:mk3e9fb80c1177aa3d9d60f93ad9a2d436f1d794 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:10:10.459703  644414 start.go:364] duration metric: took 38.917µs to acquireMachinesLock for "ha-439113-m02"
	I1115 11:10:10.459726  644414 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:10:10.459732  644414 fix.go:54] fixHost starting: m02
	I1115 11:10:10.460001  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:10:10.490667  644414 fix.go:112] recreateIfNeeded on ha-439113-m02: state=Stopped err=<nil>
	W1115 11:10:10.490698  644414 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:10:10.494022  644414 out.go:252] * Restarting existing docker container for "ha-439113-m02" ...
	I1115 11:10:10.494103  644414 cli_runner.go:164] Run: docker start ha-439113-m02
	I1115 11:10:10.848234  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:10:10.876991  644414 kic.go:430] container "ha-439113-m02" state is running.
	I1115 11:10:10.877372  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:10:10.907598  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:10.907880  644414 machine.go:94] provisionDockerMachine start ...
	I1115 11:10:10.907948  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:10.946130  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:10.946438  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:10.946448  644414 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:10:10.947277  644414 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60346->127.0.0.1:33574: read: connection reset by peer
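
This first dial is reset because the freshly started container's sshd is not listening yet; libmachine keeps retrying until the hostname probe succeeds a few seconds later. A sketch of such a dial-with-retry loop (the port is the one mapped for m02 in this run):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps attempting a TCP connect until it succeeds or the deadline
// passes, the same wait-for-sshd pattern seen between 11:10:10 and 11:10:14 here.
func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("giving up on %s: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:33574", 30*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable")
}
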
	I1115 11:10:14.161070  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02
	
	I1115 11:10:14.161137  644414 ubuntu.go:182] provisioning hostname "ha-439113-m02"
	I1115 11:10:14.161234  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:14.193112  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:14.193410  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:14.193421  644414 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113-m02 && echo "ha-439113-m02" | sudo tee /etc/hostname
	I1115 11:10:14.414884  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02
	
	I1115 11:10:14.415071  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:14.441593  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:14.441897  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:14.441920  644414 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:10:14.655329  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:10:14.655419  644414 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:10:14.655450  644414 ubuntu.go:190] setting up certificates
	I1115 11:10:14.655485  644414 provision.go:84] configureAuth start
	I1115 11:10:14.655584  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:10:14.684954  644414 provision.go:143] copyHostCerts
	I1115 11:10:14.684996  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:14.685029  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:10:14.685035  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:14.685109  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:10:14.685187  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:14.685203  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:10:14.685208  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:14.685233  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:10:14.685270  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:14.685286  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:10:14.685290  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:14.685314  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:10:14.685358  644414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113-m02 san=[127.0.0.1 192.168.49.3 ha-439113-m02 localhost minikube]
	I1115 11:10:15.164962  644414 provision.go:177] copyRemoteCerts
	I1115 11:10:15.165087  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:10:15.165161  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:15.183565  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:15.309845  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 11:10:15.309910  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:10:15.352565  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 11:10:15.352638  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 11:10:15.389073  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 11:10:15.389137  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:10:15.436657  644414 provision.go:87] duration metric: took 781.140009ms to configureAuth
	I1115 11:10:15.436685  644414 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:10:15.436943  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:15.437049  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:15.467485  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:15.467817  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:15.467839  644414 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:10:16.972469  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:10:16.972493  644414 machine.go:97] duration metric: took 6.064595432s to provisionDockerMachine
	I1115 11:10:16.972505  644414 start.go:293] postStartSetup for "ha-439113-m02" (driver="docker")
	I1115 11:10:16.972515  644414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:10:16.972579  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:10:16.972636  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.011353  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.141531  644414 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:10:17.145724  644414 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:10:17.145750  644414 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:10:17.145761  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:10:17.145819  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:10:17.145893  644414 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:10:17.145901  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 11:10:17.146000  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:10:17.153864  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:10:17.175408  644414 start.go:296] duration metric: took 202.888277ms for postStartSetup
	I1115 11:10:17.175529  644414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:10:17.175603  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.202540  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.314494  644414 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:10:17.322089  644414 fix.go:56] duration metric: took 6.862349383s for fixHost
	I1115 11:10:17.322116  644414 start.go:83] releasing machines lock for "ha-439113-m02", held for 6.862399853s
	I1115 11:10:17.322193  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:10:17.346984  644414 out.go:179] * Found network options:
	I1115 11:10:17.349992  644414 out.go:179]   - NO_PROXY=192.168.49.2
	W1115 11:10:17.357013  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:10:17.357074  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 11:10:17.357145  644414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:10:17.357204  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.357473  644414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:10:17.357528  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.392713  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.393588  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.599074  644414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:10:17.766809  644414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:10:17.766905  644414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:10:17.789163  644414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:10:17.789191  644414 start.go:496] detecting cgroup driver to use...
	I1115 11:10:17.789231  644414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:10:17.789289  644414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:10:17.815110  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:10:17.838070  644414 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:10:17.838143  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:10:17.860257  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:10:17.879590  644414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:10:18.110145  644414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:10:18.361820  644414 docker.go:234] disabling docker service ...
	I1115 11:10:18.361900  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:10:18.384569  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:10:18.416731  644414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:10:18.641786  644414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:10:18.837399  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:10:18.857492  644414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:10:18.878074  644414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:10:18.878149  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.894400  644414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:10:18.894493  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.905139  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.919066  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.934192  644414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:10:18.947793  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.962215  644414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.975913  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.990422  644414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:10:19.001078  644414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
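
The run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup and default sysctls. A hedged sketch of one of those substitutions done with a regexp over an in-memory copy of a config fragment:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
`
	// Same substitution as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	fmt.Print(out)
}
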
	I1115 11:10:19.010948  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:10:19.243052  644414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:11:49.588377  644414 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.345288768s)
	I1115 11:11:49.588399  644414 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:11:49.588453  644414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
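
Restarting CRI-O on m02 took a full 1m30s here, after which minikube waits up to 60s for the socket path to appear and then for crictl to answer. A minimal sketch of that wait-for-path loop:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the path exists or the timeout elapses,
// mirroring the "Will wait 60s for socket path" step above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket present")
}
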
	I1115 11:11:49.592631  644414 start.go:564] Will wait 60s for crictl version
	I1115 11:11:49.592694  644414 ssh_runner.go:195] Run: which crictl
	I1115 11:11:49.596673  644414 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:11:49.627565  644414 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:11:49.627655  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:11:49.657574  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:11:49.692786  644414 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:11:49.695732  644414 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 11:11:49.698667  644414 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:11:49.715635  644414 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 11:11:49.719827  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:11:49.729557  644414 mustload.go:66] Loading cluster: ha-439113
	I1115 11:11:49.729790  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:11:49.730057  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:11:49.747197  644414 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:11:49.747477  644414 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.3
	I1115 11:11:49.747492  644414 certs.go:195] generating shared ca certs ...
	I1115 11:11:49.747509  644414 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:11:49.747651  644414 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:11:49.747712  644414 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:11:49.747723  644414 certs.go:257] generating profile certs ...
	I1115 11:11:49.747793  644414 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 11:11:49.747854  644414 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.29032bc8
	I1115 11:11:49.747896  644414 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
	I1115 11:11:49.747908  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 11:11:49.747922  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 11:11:49.747939  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 11:11:49.747953  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 11:11:49.747968  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 11:11:49.747979  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 11:11:49.747995  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 11:11:49.748005  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 11:11:49.748058  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:11:49.748100  644414 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:11:49.748113  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:11:49.748139  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:11:49.748172  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:11:49.748196  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:11:49.748244  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:11:49.748274  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 11:11:49.748290  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 11:11:49.748302  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:49.748361  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:11:49.766640  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:11:49.865171  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 11:11:49.869248  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 11:11:49.877385  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 11:11:49.881661  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 11:11:49.890592  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 11:11:49.894372  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 11:11:49.902879  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 11:11:49.906594  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 11:11:49.914879  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 11:11:49.918911  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 11:11:49.928251  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 11:11:49.931713  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 11:11:49.939808  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:11:49.959417  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:11:49.979171  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:11:49.999374  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:11:50.034447  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 11:11:50.055956  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 11:11:50.075858  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:11:50.096569  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:11:50.123534  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:11:50.145099  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:11:50.165838  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:11:50.187631  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 11:11:50.201727  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 11:11:50.215561  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 11:11:50.228704  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 11:11:50.243716  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 11:11:50.256646  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 11:11:50.274083  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 11:11:50.289451  644414 ssh_runner.go:195] Run: openssl version
	I1115 11:11:50.296096  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:11:50.304816  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:11:50.308605  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:11:50.308696  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:11:50.349933  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:11:50.357859  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:11:50.366131  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:50.370090  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:50.370184  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:50.411529  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:11:50.419530  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:11:50.428122  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:11:50.431990  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:11:50.432078  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:11:50.473336  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:11:50.481905  644414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:11:50.485884  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:11:50.529145  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:11:50.575458  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:11:50.618147  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:11:50.660345  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:11:50.701441  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
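The six `openssl x509 -noout -checkend 86400` runs above confirm that each existing control-plane certificate on m02 is still valid for at least another 24 hours before the node is reused. A minimal Go sketch of an equivalent expiry check (the helper name and the path are illustrative, not part of minikube):

// Reports whether a PEM-encoded certificate expires within the given window,
// mirroring `openssl x509 -checkend 86400`. Path below is a placeholder.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}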
	I1115 11:11:50.742918  644414 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1115 11:11:50.743050  644414 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:11:50.743086  644414 kube-vip.go:115] generating kube-vip config ...
	I1115 11:11:50.743137  644414 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 11:11:50.756533  644414 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:11:50.756661  644414 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
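The kube-vip manifest above pins the control-plane VIP 192.168.49.254 to eth0 via ARP, serves it on port 8443, and relies on Lease-based leader election (plndr-cp-lock) in kube-system; because the ip_vs modules were not found, control-plane load-balancing stays disabled and only VIP failover is used. A tiny probe, under the assumption that the check runs from a host on the cluster network, that simply confirms the VIP accepts TCP connections:

// Illustrative reachability check for the kube-vip VIP shown above;
// not part of minikube itself.
package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 3*time.Second)
	if err != nil {
		log.Fatalf("VIP not reachable: %v", err)
	}
	defer conn.Close()
	fmt.Println("control-plane VIP is accepting connections")
}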
	I1115 11:11:50.756809  644414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:11:50.766452  644414 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:11:50.766519  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 11:11:50.774299  644414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 11:11:50.787555  644414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:11:50.801348  644414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 11:11:50.815426  644414 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 11:11:50.819361  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:11:50.829846  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:11:50.971817  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:11:50.986595  644414 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:11:50.987008  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:11:50.990541  644414 out.go:179] * Verifying Kubernetes components...
	I1115 11:11:50.993289  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:11:51.129111  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:11:51.143975  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 11:11:51.144052  644414 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 11:11:51.144377  644414 node_ready.go:35] waiting up to 6m0s for node "ha-439113-m02" to be "Ready" ...
	I1115 11:11:54.175109  644414 node_ready.go:49] node "ha-439113-m02" is "Ready"
	I1115 11:11:54.175142  644414 node_ready.go:38] duration metric: took 3.030741263s for node "ha-439113-m02" to be "Ready" ...
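The Ready wait above is an API-server query for the node's Ready condition. A minimal client-go sketch of the same check, assuming a kubeconfig for this cluster is available locally (the path below is a placeholder):

// Illustrative client-go check of the Ready condition on ha-439113-m02,
// mirroring the node_ready wait in the log above.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; point it at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-439113-m02", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s\n", c.Status)
		}
	}
}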
	I1115 11:11:54.175156  644414 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:11:54.175217  644414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:11:54.191139  644414 api_server.go:72] duration metric: took 3.204498804s to wait for apiserver process to appear ...
	I1115 11:11:54.191165  644414 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:11:54.191183  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:54.270987  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 11:11:54.271020  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 11:11:54.691298  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:54.702970  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:54.703005  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:55.191248  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:55.208784  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:55.208820  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:55.691283  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:55.701010  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:55.701040  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:56.191695  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:56.205744  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:56.205779  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:56.691307  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:56.703521  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 11:11:56.706435  644414 api_server.go:141] control plane version: v1.34.1
	I1115 11:11:56.706475  644414 api_server.go:131] duration metric: took 2.515302396s to wait for apiserver health ...
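The anonymous 403 and the 500s reporting `[-]poststarthook/rbac/bootstrap-roles failed` are normal while the restarted apiserver finishes bootstrapping; minikube simply re-polls /healthz until it answers 200 ok, which it does here after roughly 2.5s. A rough sketch of such a poll loop (TLS verification is skipped only to keep the sketch short; minikube itself trusts the cluster CA):

// Illustrative /healthz poll against the apiserver endpoint from the log.
// A real check should load the cluster CA instead of skipping verification.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver did not become healthy in time")
}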
	I1115 11:11:56.706484  644414 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:11:56.718211  644414 system_pods.go:59] 26 kube-system pods found
	I1115 11:11:56.718249  644414 system_pods.go:61] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.718259  644414 system_pods.go:61] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.718265  644414 system_pods.go:61] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 11:11:56.718282  644414 system_pods.go:61] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 11:11:56.718287  644414 system_pods.go:61] "etcd-ha-439113-m03" [5e59ce68-9c25-4639-ac5a-1f55855c2a60] Running
	I1115 11:11:56.718291  644414 system_pods.go:61] "kindnet-4k2k2" [5a741bbc-f2ab-4432-b229-309437f9455c] Running
	I1115 11:11:56.718295  644414 system_pods.go:61] "kindnet-kxl4t" [99aa3cce-8825-4785-a8c2-b42146240e09] Running
	I1115 11:11:56.718299  644414 system_pods.go:61] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 11:11:56.718305  644414 system_pods.go:61] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 11:11:56.718316  644414 system_pods.go:61] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 11:11:56.718322  644414 system_pods.go:61] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 11:11:56.718327  644414 system_pods.go:61] "kube-apiserver-ha-439113-m03" [46354a8c-2a61-4934-8b1a-57c563aa326b] Running
	I1115 11:11:56.718337  644414 system_pods.go:61] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:11:56.718352  644414 system_pods.go:61] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 11:11:56.718361  644414 system_pods.go:61] "kube-controller-manager-ha-439113-m03" [555d953c-b848-4daa-90c5-07b51c5c7722] Running
	I1115 11:11:56.718366  644414 system_pods.go:61] "kube-proxy-2fgtm" [7a3fd93a-54d8-4821-a49a-6839ed65fe69] Running
	I1115 11:11:56.718373  644414 system_pods.go:61] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 11:11:56.718384  644414 system_pods.go:61] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 11:11:56.718389  644414 system_pods.go:61] "kube-proxy-njlxj" [9150615b-96b9-416b-a5ca-79c380a8a9cb] Running
	I1115 11:11:56.718395  644414 system_pods.go:61] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:11:56.718405  644414 system_pods.go:61] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 11:11:56.718410  644414 system_pods.go:61] "kube-scheduler-ha-439113-m03" [e18cb155-9e7b-43e1-818b-bfff6a289f39] Running
	I1115 11:11:56.718414  644414 system_pods.go:61] "kube-vip-ha-439113" [8ed03cf0-14c3-4946-a73d-8cc5545156cb] Running
	I1115 11:11:56.718426  644414 system_pods.go:61] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 11:11:56.718432  644414 system_pods.go:61] "kube-vip-ha-439113-m03" [c0ddae32-acc6-4cda-8dde-084b2eea14a8] Running
	I1115 11:11:56.718438  644414 system_pods.go:61] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:11:56.718444  644414 system_pods.go:74] duration metric: took 11.954415ms to wait for pod list to return data ...
	I1115 11:11:56.718453  644414 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:11:56.724493  644414 default_sa.go:45] found service account: "default"
	I1115 11:11:56.724536  644414 default_sa.go:55] duration metric: took 6.072136ms for default service account to be created ...
	I1115 11:11:56.724547  644414 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:11:56.819602  644414 system_pods.go:86] 26 kube-system pods found
	I1115 11:11:56.819647  644414 system_pods.go:89] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.819658  644414 system_pods.go:89] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.819664  644414 system_pods.go:89] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 11:11:56.819670  644414 system_pods.go:89] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 11:11:56.819674  644414 system_pods.go:89] "etcd-ha-439113-m03" [5e59ce68-9c25-4639-ac5a-1f55855c2a60] Running
	I1115 11:11:56.819679  644414 system_pods.go:89] "kindnet-4k2k2" [5a741bbc-f2ab-4432-b229-309437f9455c] Running
	I1115 11:11:56.819694  644414 system_pods.go:89] "kindnet-kxl4t" [99aa3cce-8825-4785-a8c2-b42146240e09] Running
	I1115 11:11:56.819703  644414 system_pods.go:89] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 11:11:56.819711  644414 system_pods.go:89] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 11:11:56.819721  644414 system_pods.go:89] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 11:11:56.819726  644414 system_pods.go:89] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 11:11:56.819730  644414 system_pods.go:89] "kube-apiserver-ha-439113-m03" [46354a8c-2a61-4934-8b1a-57c563aa326b] Running
	I1115 11:11:56.819738  644414 system_pods.go:89] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:11:56.819747  644414 system_pods.go:89] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 11:11:56.819752  644414 system_pods.go:89] "kube-controller-manager-ha-439113-m03" [555d953c-b848-4daa-90c5-07b51c5c7722] Running
	I1115 11:11:56.819756  644414 system_pods.go:89] "kube-proxy-2fgtm" [7a3fd93a-54d8-4821-a49a-6839ed65fe69] Running
	I1115 11:11:56.819770  644414 system_pods.go:89] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 11:11:56.819778  644414 system_pods.go:89] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 11:11:56.819783  644414 system_pods.go:89] "kube-proxy-njlxj" [9150615b-96b9-416b-a5ca-79c380a8a9cb] Running
	I1115 11:11:56.819789  644414 system_pods.go:89] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:11:56.819797  644414 system_pods.go:89] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 11:11:56.819803  644414 system_pods.go:89] "kube-scheduler-ha-439113-m03" [e18cb155-9e7b-43e1-818b-bfff6a289f39] Running
	I1115 11:11:56.819811  644414 system_pods.go:89] "kube-vip-ha-439113" [8ed03cf0-14c3-4946-a73d-8cc5545156cb] Running
	I1115 11:11:56.819815  644414 system_pods.go:89] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 11:11:56.819819  644414 system_pods.go:89] "kube-vip-ha-439113-m03" [c0ddae32-acc6-4cda-8dde-084b2eea14a8] Running
	I1115 11:11:56.819824  644414 system_pods.go:89] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:11:56.819841  644414 system_pods.go:126] duration metric: took 95.282586ms to wait for k8s-apps to be running ...
	I1115 11:11:56.819854  644414 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:11:56.819918  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:11:56.837030  644414 system_svc.go:56] duration metric: took 17.155047ms WaitForService to wait for kubelet
	I1115 11:11:56.837061  644414 kubeadm.go:587] duration metric: took 5.85042521s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:11:56.837082  644414 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:11:56.841207  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:11:56.841239  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:11:56.841253  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:11:56.841257  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:11:56.841262  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:11:56.841265  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:11:56.841282  644414 node_conditions.go:105] duration metric: took 4.194343ms to run NodePressure ...
	I1115 11:11:56.841300  644414 start.go:242] waiting for startup goroutines ...
	I1115 11:11:56.841324  644414 start.go:256] writing updated cluster config ...
	I1115 11:11:56.844944  644414 out.go:203] 
	I1115 11:11:56.848069  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:11:56.848191  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:11:56.851562  644414 out.go:179] * Starting "ha-439113-m04" worker node in "ha-439113" cluster
	I1115 11:11:56.855417  644414 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:11:56.858314  644414 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:11:56.861196  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:11:56.861243  644414 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:11:56.861453  644414 cache.go:65] Caching tarball of preloaded images
	I1115 11:11:56.861539  644414 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:11:56.861554  644414 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:11:56.861725  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:11:56.894239  644414 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:11:56.894262  644414 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:11:56.894277  644414 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:11:56.894301  644414 start.go:360] acquireMachinesLock for ha-439113-m04: {Name:mke6e857e5b25fb7a1d96f7fe08934c7b44258f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:11:56.894360  644414 start.go:364] duration metric: took 38.252µs to acquireMachinesLock for "ha-439113-m04"
	I1115 11:11:56.894384  644414 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:11:56.894391  644414 fix.go:54] fixHost starting: m04
	I1115 11:11:56.894639  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:11:56.934538  644414 fix.go:112] recreateIfNeeded on ha-439113-m04: state=Stopped err=<nil>
	W1115 11:11:56.934571  644414 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:11:56.937723  644414 out.go:252] * Restarting existing docker container for "ha-439113-m04" ...
	I1115 11:11:56.937813  644414 cli_runner.go:164] Run: docker start ha-439113-m04
	I1115 11:11:57.292353  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:11:57.320590  644414 kic.go:430] container "ha-439113-m04" state is running.
	I1115 11:11:57.320978  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:11:57.343942  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:11:57.344181  644414 machine.go:94] provisionDockerMachine start ...
	I1115 11:11:57.344243  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:11:57.365933  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:11:57.366241  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:11:57.366255  644414 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:11:57.366995  644414 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:12:00.666212  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m04
	
	I1115 11:12:00.666285  644414 ubuntu.go:182] provisioning hostname "ha-439113-m04"
	I1115 11:12:00.666399  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:00.703141  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:12:00.703457  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:12:00.703468  644414 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113-m04 && echo "ha-439113-m04" | sudo tee /etc/hostname
	I1115 11:12:00.898855  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m04
	
	I1115 11:12:00.898950  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:00.948730  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:12:00.949093  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:12:00.949120  644414 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:12:01.162002  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:12:01.162071  644414 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:12:01.162106  644414 ubuntu.go:190] setting up certificates
	I1115 11:12:01.162147  644414 provision.go:84] configureAuth start
	I1115 11:12:01.162228  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:12:01.189297  644414 provision.go:143] copyHostCerts
	I1115 11:12:01.189345  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:12:01.189381  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:12:01.189387  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:12:01.189469  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:12:01.189552  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:12:01.189569  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:12:01.189574  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:12:01.189602  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:12:01.189643  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:12:01.189658  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:12:01.189662  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:12:01.189686  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:12:01.189732  644414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113-m04 san=[127.0.0.1 192.168.49.5 ha-439113-m04 localhost minikube]
	I1115 11:12:01.793644  644414 provision.go:177] copyRemoteCerts
	I1115 11:12:01.793724  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:12:01.793769  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:01.813786  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:01.932159  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 11:12:01.932221  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:12:01.959503  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 11:12:01.959565  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 11:12:01.985894  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 11:12:01.985956  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:12:02.016893  644414 provision.go:87] duration metric: took 854.716001ms to configureAuth
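configureAuth regenerates the machine's server certificate with the SANs listed at provision.go:117 (127.0.0.1, 192.168.49.5, ha-439113-m04, localhost, minikube) and copies it to /etc/docker on the node. A small sketch, using a placeholder path, that dumps the SANs embedded in such a PEM so they can be compared against that list:

// Illustrative SAN dump for a PEM-encoded server certificate; the path is a
// placeholder, not taken from the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/path/to/server.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}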
	I1115 11:12:02.016972  644414 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:12:02.017324  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:12:02.017494  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.042340  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:12:02.042641  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:12:02.042657  644414 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:12:02.421793  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:12:02.421855  644414 machine.go:97] duration metric: took 5.077657106s to provisionDockerMachine
	I1115 11:12:02.421891  644414 start.go:293] postStartSetup for "ha-439113-m04" (driver="docker")
	I1115 11:12:02.421937  644414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:12:02.422045  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:12:02.422113  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.441735  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.549972  644414 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:12:02.553292  644414 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:12:02.553326  644414 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:12:02.553339  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:12:02.553398  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:12:02.553481  644414 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:12:02.553492  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 11:12:02.553591  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:12:02.561640  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:12:02.581188  644414 start.go:296] duration metric: took 159.246745ms for postStartSetup
	I1115 11:12:02.581283  644414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:12:02.581334  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.598560  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.702117  644414 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:12:02.707693  644414 fix.go:56] duration metric: took 5.813294693s for fixHost
	I1115 11:12:02.707719  644414 start.go:83] releasing machines lock for "ha-439113-m04", held for 5.813345581s
	I1115 11:12:02.707815  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:12:02.727805  644414 out.go:179] * Found network options:
	I1115 11:12:02.730701  644414 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1115 11:12:02.733528  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:12:02.733564  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:12:02.733599  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:12:02.733615  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 11:12:02.733685  644414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:12:02.733735  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.734056  644414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:12:02.734115  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.762180  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.770444  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.906742  644414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:12:02.982777  644414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:12:02.982870  644414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:12:02.991311  644414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:12:02.991334  644414 start.go:496] detecting cgroup driver to use...
	I1115 11:12:02.991372  644414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:12:02.991426  644414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:12:03.010259  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:12:03.026209  644414 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:12:03.026295  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:12:03.042235  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:12:03.056541  644414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:12:03.207440  644414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:12:03.335536  644414 docker.go:234] disabling docker service ...
	I1115 11:12:03.335651  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:12:03.353883  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:12:03.369431  644414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:12:03.486211  644414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:12:03.610710  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:12:03.625360  644414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:12:03.641312  644414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:12:03.641378  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.651264  644414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:12:03.651338  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.665109  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.675589  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.686503  644414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:12:03.694865  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.705871  644414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.714726  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.723852  644414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:12:03.731853  644414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:12:03.740511  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:12:03.853255  644414 ssh_runner.go:195] Run: sudo systemctl restart crio
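[editor's note] The sed edits above boil down to four settings in the CRI-O drop-in, after which systemd is reloaded and crio restarted. A hedged sketch of verifying them on the node (key names taken from the commands in this log; the surrounding TOML sections of the stock config are not reproduced here):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",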
	I1115 11:12:04.003040  644414 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:12:04.003163  644414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:12:04.007573  644414 start.go:564] Will wait 60s for crictl version
	I1115 11:12:04.007728  644414 ssh_runner.go:195] Run: which crictl
	I1115 11:12:04.014385  644414 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:12:04.042291  644414 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:12:04.042400  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:12:04.076162  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:12:04.110265  644414 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:12:04.113250  644414 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 11:12:04.116130  644414 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1115 11:12:04.118985  644414 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:12:04.135746  644414 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 11:12:04.140419  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
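[editor's note] The one-liner above is the pattern used throughout this log for /etc/hosts edits: filter out any existing host.minikube.internal entry, append the fresh one, and copy the temp file back with sudo. A quick check of the result (address from the log):

    grep 'host.minikube.internal' /etc/hosts
    #   192.168.49.1	host.minikube.internal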
	I1115 11:12:04.151141  644414 mustload.go:66] Loading cluster: ha-439113
	I1115 11:12:04.151383  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:12:04.151632  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:12:04.169829  644414 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:12:04.170121  644414 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.5
	I1115 11:12:04.170137  644414 certs.go:195] generating shared ca certs ...
	I1115 11:12:04.170152  644414 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:12:04.170287  644414 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:12:04.170332  644414 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:12:04.170347  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 11:12:04.170362  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 11:12:04.170377  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 11:12:04.170392  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 11:12:04.170455  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:12:04.170489  644414 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:12:04.170502  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:12:04.170528  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:12:04.170554  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:12:04.170579  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:12:04.170625  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:12:04.170653  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.170666  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.170682  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.170703  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:12:04.192999  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:12:04.214491  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:12:04.238386  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:12:04.261791  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:12:04.282186  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:12:04.301663  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:12:04.323494  644414 ssh_runner.go:195] Run: openssl version
	I1115 11:12:04.330506  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:12:04.339641  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.343359  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.343471  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.384944  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:12:04.393726  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:12:04.401885  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.405917  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.405984  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.448096  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:12:04.456341  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:12:04.464809  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.469548  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.469657  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.512809  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
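[editor's note] The three hash-named symlinks created above follow OpenSSL's hashed CA-directory convention: the link name is the certificate's subject hash plus a ".0" suffix, which is what the preceding "openssl x509 -hash -noout" runs compute. Reproducing one of them by hand (file names and hash value taken from this log) would look like:

    # Prints the subject hash OpenSSL uses for lookups in /etc/ssl/certs;
    # for minikubeCA.pem this run resolved it to b5213941.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0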
	I1115 11:12:04.521564  644414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:12:04.525477  644414 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 11:12:04.525571  644414 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1115 11:12:04.525671  644414 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:12:04.525750  644414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:12:04.534631  644414 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:12:04.534732  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1115 11:12:04.542762  644414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 11:12:04.555474  644414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:12:04.568549  644414 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 11:12:04.572246  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:12:04.582645  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:12:04.720397  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
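[editor's note] With the 10-kubeadm.conf drop-in and kubelet.service unit written, daemon-reload run, and kubelet started, the effective unit can be inspected on the node; a small sketch, assuming shell access to ha-439113-m04:

    # Render the kubelet unit together with the drop-in installed above,
    # including the ExecStart flags listed earlier in this log.
    systemctl cat kubelet
    # Confirm the service is running after the start issued above.
    systemctl is-active kubelet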
	I1115 11:12:04.734431  644414 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1115 11:12:04.734793  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:12:04.737605  644414 out.go:179] * Verifying Kubernetes components...
	I1115 11:12:04.740524  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:12:04.870273  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:12:04.886167  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 11:12:04.886294  644414 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 11:12:04.886567  644414 node_ready.go:35] waiting up to 6m0s for node "ha-439113-m04" to be "Ready" ...
	I1115 11:12:04.890505  644414 node_ready.go:49] node "ha-439113-m04" is "Ready"
	I1115 11:12:04.890532  644414 node_ready.go:38] duration metric: took 3.920221ms for node "ha-439113-m04" to be "Ready" ...
	I1115 11:12:04.890569  644414 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:12:04.890627  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:12:04.906249  644414 system_svc.go:56] duration metric: took 15.693042ms WaitForService to wait for kubelet
	I1115 11:12:04.906349  644414 kubeadm.go:587] duration metric: took 171.724556ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:12:04.906397  644414 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:12:04.916259  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:12:04.916376  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:12:04.916421  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:12:04.916457  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:12:04.916477  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:12:04.916512  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:12:04.916538  644414 node_conditions.go:105] duration metric: took 10.120472ms to run NodePressure ...
	I1115 11:12:04.916592  644414 start.go:242] waiting for startup goroutines ...
	I1115 11:12:04.916629  644414 start.go:256] writing updated cluster config ...
	I1115 11:12:04.917071  644414 ssh_runner.go:195] Run: rm -f paused
	I1115 11:12:04.922331  644414 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:12:04.922989  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
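[editor's note] This is the same rest.Config dumped again for the extra kube-system wait that follows. A quick way to probe the API server it talks to from the workstation, assuming the usual minikube-created kubectl context named after the profile:

    # Hits the apiserver readiness endpoint through the ha-439113 context.
    kubectl --context ha-439113 get --raw /readyz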
	I1115 11:12:04.955742  644414 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4g6sm" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:12:06.963336  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:08.980310  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:11.479328  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:13.964446  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:16.463626  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:18.465383  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:20.962686  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:22.964048  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:24.966447  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:27.463942  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:29.466713  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	I1115 11:12:30.462795  644414 pod_ready.go:94] pod "coredns-66bc5c9577-4g6sm" is "Ready"
	I1115 11:12:30.462820  644414 pod_ready.go:86] duration metric: took 25.506978071s for pod "coredns-66bc5c9577-4g6sm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.462830  644414 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mlm6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.469415  644414 pod_ready.go:94] pod "coredns-66bc5c9577-mlm6m" is "Ready"
	I1115 11:12:30.469441  644414 pod_ready.go:86] duration metric: took 6.60411ms for pod "coredns-66bc5c9577-mlm6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.473231  644414 pod_ready.go:83] waiting for pod "etcd-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.480070  644414 pod_ready.go:94] pod "etcd-ha-439113" is "Ready"
	I1115 11:12:30.480096  644414 pod_ready.go:86] duration metric: took 6.837381ms for pod "etcd-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.480106  644414 pod_ready.go:83] waiting for pod "etcd-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.486550  644414 pod_ready.go:94] pod "etcd-ha-439113-m02" is "Ready"
	I1115 11:12:30.486578  644414 pod_ready.go:86] duration metric: took 6.465838ms for pod "etcd-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.486589  644414 pod_ready.go:83] waiting for pod "etcd-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.657170  644414 request.go:683] "Waited before sending request" delay="167.271906ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 11:12:30.660251  644414 pod_ready.go:99] pod "etcd-ha-439113-m03" in "kube-system" namespace is gone: node "ha-439113-m03" hosting pod "etcd-ha-439113-m03" is not found/running (skipping!): nodes "ha-439113-m03" not found
	I1115 11:12:30.660271  644414 pod_ready.go:86] duration metric: took 173.674417ms for pod "etcd-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.856532  644414 request.go:683] "Waited before sending request" delay="196.157902ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1115 11:12:30.862230  644414 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.056631  644414 request.go:683] "Waited before sending request" delay="194.303781ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113"
	I1115 11:12:31.256567  644414 request.go:683] "Waited before sending request" delay="196.320457ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:31.260364  644414 pod_ready.go:94] pod "kube-apiserver-ha-439113" is "Ready"
	I1115 11:12:31.260440  644414 pod_ready.go:86] duration metric: took 398.184225ms for pod "kube-apiserver-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.260460  644414 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.456733  644414 request.go:683] "Waited before sending request" delay="196.195936ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113-m02"
	I1115 11:12:31.657283  644414 request.go:683] "Waited before sending request" delay="189.364553ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:31.669486  644414 pod_ready.go:94] pod "kube-apiserver-ha-439113-m02" is "Ready"
	I1115 11:12:31.669527  644414 pod_ready.go:86] duration metric: took 409.053455ms for pod "kube-apiserver-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.669545  644414 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.856759  644414 request.go:683] "Waited before sending request" delay="187.140315ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113-m03"
	I1115 11:12:32.057081  644414 request.go:683] "Waited before sending request" delay="194.340659ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 11:12:32.060246  644414 pod_ready.go:99] pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace is gone: node "ha-439113-m03" hosting pod "kube-apiserver-ha-439113-m03" is not found/running (skipping!): nodes "ha-439113-m03" not found
	I1115 11:12:32.060269  644414 pod_ready.go:86] duration metric: took 390.716754ms for pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:32.256765  644414 request.go:683] "Waited before sending request" delay="196.346784ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1115 11:12:32.260967  644414 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:32.457411  644414 request.go:683] "Waited before sending request" delay="196.343854ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-439113"
	I1115 11:12:32.656543  644414 request.go:683] "Waited before sending request" delay="195.259075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:32.857312  644414 request.go:683] "Waited before sending request" delay="95.237723ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-439113"
	I1115 11:12:33.056759  644414 request.go:683] "Waited before sending request" delay="193.348543ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:33.456512  644414 request.go:683] "Waited before sending request" delay="191.213474ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:33.857248  644414 request.go:683] "Waited before sending request" delay="92.163849ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	W1115 11:12:34.268915  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:36.769187  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:38.769594  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:40.775431  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:43.268655  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	I1115 11:12:45.275032  644414 pod_ready.go:94] pod "kube-controller-manager-ha-439113" is "Ready"
	I1115 11:12:45.275075  644414 pod_ready.go:86] duration metric: took 13.01407493s for pod "kube-controller-manager-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.275087  644414 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.305482  644414 pod_ready.go:94] pod "kube-controller-manager-ha-439113-m02" is "Ready"
	I1115 11:12:45.305509  644414 pod_ready.go:86] duration metric: took 30.414418ms for pod "kube-controller-manager-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.305520  644414 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.308592  644414 pod_ready.go:99] pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace is gone: getting pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace (will retry): pods "kube-controller-manager-ha-439113-m03" not found
	I1115 11:12:45.308616  644414 pod_ready.go:86] duration metric: took 3.088777ms for pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.312595  644414 pod_ready.go:83] waiting for pod "kube-proxy-2fgtm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.319584  644414 pod_ready.go:94] pod "kube-proxy-2fgtm" is "Ready"
	I1115 11:12:45.319658  644414 pod_ready.go:86] duration metric: took 6.96691ms for pod "kube-proxy-2fgtm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.319684  644414 pod_ready.go:83] waiting for pod "kube-proxy-k7bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.333364  644414 pod_ready.go:94] pod "kube-proxy-k7bcn" is "Ready"
	I1115 11:12:45.333446  644414 pod_ready.go:86] duration metric: took 13.743575ms for pod "kube-proxy-k7bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.333472  644414 pod_ready.go:83] waiting for pod "kube-proxy-kgftx" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.461841  644414 request.go:683] "Waited before sending request" delay="128.26876ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kgftx"
	I1115 11:12:45.662133  644414 request.go:683] "Waited before sending request" delay="196.336603ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:45.666231  644414 pod_ready.go:94] pod "kube-proxy-kgftx" is "Ready"
	I1115 11:12:45.666259  644414 pod_ready.go:86] duration metric: took 332.766862ms for pod "kube-proxy-kgftx" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.862402  644414 request.go:683] "Waited before sending request" delay="196.047882ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1115 11:12:45.868100  644414 pod_ready.go:83] waiting for pod "kube-scheduler-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:46.061503  644414 request.go:683] "Waited before sending request" delay="193.299208ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113"
	I1115 11:12:46.262349  644414 request.go:683] "Waited before sending request" delay="196.337092ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:46.266390  644414 pod_ready.go:94] pod "kube-scheduler-ha-439113" is "Ready"
	I1115 11:12:46.266415  644414 pod_ready.go:86] duration metric: took 398.289218ms for pod "kube-scheduler-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:46.266426  644414 pod_ready.go:83] waiting for pod "kube-scheduler-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:46.461857  644414 request.go:683] "Waited before sending request" delay="195.354736ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113-m02"
	I1115 11:12:46.662164  644414 request.go:683] "Waited before sending request" delay="196.315389ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:46.862451  644414 request.go:683] "Waited before sending request" delay="95.198714ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113-m02"
	I1115 11:12:47.062064  644414 request.go:683] "Waited before sending request" delay="194.32444ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:47.462004  644414 request.go:683] "Waited before sending request" delay="191.259764ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:47.862129  644414 request.go:683] "Waited before sending request" delay="91.206426ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	W1115 11:12:48.273067  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:50.273503  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:52.273873  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:54.774253  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:56.774741  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:59.273054  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:01.273531  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:03.274007  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:05.773995  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:08.274070  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:10.774950  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:13.273142  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:15.774523  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:18.275146  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:20.775066  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:23.273644  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:25.772983  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:27.773086  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:29.774439  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:32.274282  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:34.773274  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:36.774007  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:38.774499  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:41.272920  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:43.272980  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:45.290069  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:47.774370  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:49.775099  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:52.273471  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:54.774040  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:56.776828  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:58.777477  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:01.274086  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:03.774603  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:06.274270  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:08.776333  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:11.274406  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:13.775288  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:16.274470  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:18.774609  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:21.275329  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:23.773704  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:25.781356  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:28.273802  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:30.773867  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:33.273730  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:35.274388  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:37.774988  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:40.273650  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:42.274574  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:44.775136  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:47.273253  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:49.774129  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:52.274209  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:54.773957  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:56.774057  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:58.774103  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:00.794798  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:03.273466  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:05.274892  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:07.773906  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:09.775150  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:12.274372  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:14.773892  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:16.774210  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:19.273576  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:21.773796  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:24.273997  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:26.274175  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:28.775134  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:31.275044  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:33.773408  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:35.774067  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:37.774322  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:40.273391  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:42.275088  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:44.773835  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:46.773944  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:49.273345  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:51.274206  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:53.275406  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:55.276298  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:57.773509  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:59.773622  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:16:01.773991  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:16:04.273687  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	I1115 11:16:04.922792  644414 pod_ready.go:86] duration metric: took 3m18.656348919s for pod "kube-scheduler-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:16:04.922828  644414 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1115 11:16:04.922844  644414 pod_ready.go:40] duration metric: took 4m0.000432421s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:16:04.926118  644414 out.go:203] 
	W1115 11:16:04.928902  644414 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1115 11:16:04.931693  644414 out.go:203] 
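[editor's note] The GUEST_START failure above is the 4m0s extra wait expiring while kube-scheduler-ha-439113-m02 never reported Ready (the repeated pod_ready.go:104 lines). A hedged sketch of the follow-up inspection one would run by hand, using standard kubectl commands rather than anything from the test itself, and assuming the minikube-created context named after the profile:

    # List the scheduler pods the wait loop was polling (same label selector as the log).
    kubectl --context ha-439113 get pods -n kube-system -l component=kube-scheduler -o wide
    # Show conditions and events for the pod that stayed not-Ready.
    kubectl --context ha-439113 describe pod -n kube-system kube-scheduler-ha-439113-m02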
	
	
	==> CRI-O <==
	Nov 15 11:12:27 ha-439113 crio[666]: time="2025-11-15T11:12:27.920544626Z" level=info msg="Started container" PID=1433 containerID=45eb4921c003b25c5119ab01196399bab3eb8157fb07652ba3dcd97194afeb00 description=kube-system/kube-controller-manager-ha-439113/kube-controller-manager id=fa832c19-eb18-47af-80d3-4790cad3225e name=/runtime.v1.RuntimeService/StartContainer sandboxID=21e90ac59d7247826fca1e350ef4c6d641540ffb41065bb8d5e3136341a1f7e4
	Nov 15 11:12:28 ha-439113 conmon[1137]: conmon d86466a64c1754474a32 <ninfo>: container 1142 exited with status 1
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.303366553Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=776f7c67-301a-4655-9f1e-c0f4d2b6bdaf name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.306045894Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=01b7c975-ef4d-4609-85fa-e323353431bd name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.308511994Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8f6d110a-f199-4160-b315-87aac4712b71 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.308610668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.319769952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.320004347Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1658f23bf43e3861272003631cb2125f6cd69132a0a16a46de920e7b647021eb/merged/etc/passwd: no such file or directory"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.320027059Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1658f23bf43e3861272003631cb2125f6cd69132a0a16a46de920e7b647021eb/merged/etc/group: no such file or directory"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.320305901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.388496736Z" level=info msg="Created container 4307de9c87d365cc4c90d647228026e786575caa2299668420c19c736afced68: kube-system/storage-provisioner/storage-provisioner" id=8f6d110a-f199-4160-b315-87aac4712b71 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.38961912Z" level=info msg="Starting container: 4307de9c87d365cc4c90d647228026e786575caa2299668420c19c736afced68" id=bfef2a5f-46f3-44e9-9266-3ac15c3e2f60 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.393175299Z" level=info msg="Started container" PID=1445 containerID=4307de9c87d365cc4c90d647228026e786575caa2299668420c19c736afced68 description=kube-system/storage-provisioner/storage-provisioner id=bfef2a5f-46f3-44e9-9266-3ac15c3e2f60 name=/runtime.v1.RuntimeService/StartContainer sandboxID=94d3e897f0476e4f3abaa049d7990fde57c5406c8c5bb70e73a7146a92b5c99a
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.422814838Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.426273738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.426311481Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.42633375Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.435633901Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.43567025Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.435692969Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.443292786Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.443437303Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.443463231Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.447544594Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.447580648Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	4307de9c87d36       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   3 minutes ago       Running             storage-provisioner       4                   94d3e897f0476       storage-provisioner                 kube-system
	45eb4921c003b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   3 minutes ago       Running             kube-controller-manager   6                   21e90ac59d724       kube-controller-manager-ha-439113   kube-system
	56ca04edf5389       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   4 minutes ago       Running             busybox                   2                   b9f35a414830a       busybox-7b57f96db7-vddcm            default
	16ebc70b03ad3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 minutes ago       Running             kube-proxy                2                   dbf5fcdbf92d1       kube-proxy-k7bcn                    kube-system
	ff8f6f3f30d64       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   2                   d43213c9afa20       coredns-66bc5c9577-mlm6m            kube-system
	66d3cca12da72       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   2                   8504950f9102e       coredns-66bc5c9577-4g6sm            kube-system
	624e9c4484de9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 minutes ago       Running             kindnet-cni               2                   02b3165dd3170       kindnet-q4kpj                       kube-system
	d86466a64c175       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Exited              storage-provisioner       3                   94d3e897f0476       storage-provisioner                 kube-system
	be71898116747       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   4 minutes ago       Exited              kube-controller-manager   5                   21e90ac59d724       kube-controller-manager-ha-439113   kube-system
	d24d48c3f9b01       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   4 minutes ago       Running             kube-apiserver            3                   80d29a5d57c81       kube-apiserver-ha-439113            kube-system
	ab0d0c34b46d5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   6 minutes ago       Running             etcd                      2                   e3e01caa47fdb       etcd-ha-439113                      kube-system
	f5462600e253c       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   6 minutes ago       Running             kube-vip                  2                   c0b629ba4b9ea       kube-vip-ha-439113                  kube-system
	c9aa769ac1e41       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Exited              kube-apiserver            2                   80d29a5d57c81       kube-apiserver-ha-439113            kube-system
	e0b918dd4970f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   6 minutes ago       Running             kube-scheduler            2                   1552e5cdb042a       kube-scheduler-ha-439113            kube-system
	
	
	==> coredns [66d3cca12da72808d1018e1a6ec972546fda6374c31dd377d5d8dc684e2ceb3e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34700 - 4439 "HINFO IN 6986068788273380099.6825403624280059219. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030217966s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [ff8f6f3f30d64dbd44181797a52d66d21ee28c0ae7639d5d1bdbffd3052c24be] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40461 - 514 "HINFO IN 2475121785806463085.1107501801826590384. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005830505s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-439113
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_52_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:52:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:16:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:15:59 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:15:59 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:15:59 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:15:59 +0000   Sat, 15 Nov 2025 11:12:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-439113
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                6518a9f9-bb2d-42ae-b78a-3db01b5306a4
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vddcm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-4g6sm             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     23m
	  kube-system                 coredns-66bc5c9577-mlm6m             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     23m
	  kube-system                 etcd-ha-439113                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         23m
	  kube-system                 kindnet-q4kpj                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      23m
	  kube-system                 kube-apiserver-ha-439113             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-439113    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-k7bcn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-439113             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-439113                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m7s                   kube-proxy       
	  Normal   Starting                 8m                     kube-proxy       
	  Normal   Starting                 23m                    kube-proxy       
	  Normal   NodeHasSufficientPID     23m                    kubelet          Node ha-439113 status is now: NodeHasSufficientPID
	  Normal   Starting                 23m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 23m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  23m                    kubelet          Node ha-439113 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23m                    kubelet          Node ha-439113 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           23m                    node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           22m                    node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   NodeReady                22m                    kubelet          Node ha-439113 status is now: NodeReady
	  Normal   RegisteredNode           21m                    node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   NodeHasNoDiskPressure    8m29s (x8 over 8m29s)  kubelet          Node ha-439113 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 8m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m29s (x8 over 8m29s)  kubelet          Node ha-439113 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     8m29s (x8 over 8m29s)  kubelet          Node ha-439113 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m3s                   node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           7m49s                  node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           7m12s                  node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   NodeHasSufficientMemory  6m1s (x8 over 6m1s)    kubelet          Node ha-439113 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m1s (x8 over 6m1s)    kubelet          Node ha-439113 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m1s (x8 over 6m1s)    kubelet          Node ha-439113 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m9s                   node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           3m38s                  node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	
	
	Name:               ha-439113-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T10_53_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:53:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:16:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:15:51 +0000   Sat, 15 Nov 2025 11:08:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:15:51 +0000   Sat, 15 Nov 2025 11:08:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:15:51 +0000   Sat, 15 Nov 2025 11:08:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:15:51 +0000   Sat, 15 Nov 2025 11:12:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-439113-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d3455c64-e9a7-4ebe-b716-3cc9dc8ab51a
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6x277                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 etcd-ha-439113-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         22m
	  kube-system                 kindnet-mcj42                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-ha-439113-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-439113-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-kgftx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-439113-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-439113-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 22m                    kube-proxy       
	  Normal   Starting                 3m34s                  kube-proxy       
	  Normal   Starting                 7m39s                  kube-proxy       
	  Normal   RegisteredNode           22m                    node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           22m                    node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           21m                    node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   NodeNotReady             17m                    node-controller  Node ha-439113-m02 status is now: NodeNotReady
	  Normal   Starting                 8m25s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     8m25s (x8 over 8m25s)  kubelet          Node ha-439113-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    8m25s (x8 over 8m25s)  kubelet          Node ha-439113-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  8m25s (x8 over 8m25s)  kubelet          Node ha-439113-m02 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 8m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           8m3s                   node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           7m49s                  node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           7m12s                  node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   Starting                 5m57s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m57s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m57s (x8 over 5m57s)  kubelet          Node ha-439113-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m57s (x8 over 5m57s)  kubelet          Node ha-439113-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m57s (x8 over 5m57s)  kubelet          Node ha-439113-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        4m57s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m9s                   node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           3m38s                  node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	
	
	Name:               ha-439113-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T10_56_52_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:56:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:16:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:16:05 +0000   Sat, 15 Nov 2025 11:08:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:16:05 +0000   Sat, 15 Nov 2025 11:08:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:16:05 +0000   Sat, 15 Nov 2025 11:08:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:16:05 +0000   Sat, 15 Nov 2025 11:08:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-439113-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                bf4456d3-e8dc-4a97-8e4f-cb829c9a4b90
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-trswm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-4k2k2               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-proxy-2fgtm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m59s                  kube-proxy       
	  Normal   Starting                 3m37s                  kube-proxy       
	  Normal   Starting                 19m                    kube-proxy       
	  Normal   Starting                 19m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 19m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           19m                    node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   NodeHasSufficientPID     19m (x3 over 19m)      kubelet          Node ha-439113-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  19m (x3 over 19m)      kubelet          Node ha-439113-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m (x3 over 19m)      kubelet          Node ha-439113-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           19m                    node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   RegisteredNode           19m                    node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   NodeReady                18m                    kubelet          Node ha-439113-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m3s                   node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   RegisteredNode           7m49s                  node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   Starting                 7m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m20s (x8 over 7m23s)  kubelet          Node ha-439113-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m20s (x8 over 7m23s)  kubelet          Node ha-439113-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m20s (x8 over 7m23s)  kubelet          Node ha-439113-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             7m13s                  node-controller  Node ha-439113-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           7m12s                  node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Warning  CgroupV1                 4m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 4m11s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m9s                   node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   NodeHasSufficientMemory  4m8s (x8 over 4m11s)   kubelet          Node ha-439113-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m8s (x8 over 4m11s)   kubelet          Node ha-439113-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m8s (x8 over 4m11s)   kubelet          Node ha-439113-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m38s                  node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	
	
	==> dmesg <==
	[Nov15 09:26] systemd-journald[225]: Failed to send WATCHDOG=1 notification message: Connection refused
	[Nov15 09:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[  +0.057232] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov15 10:38] overlayfs: idmapped layers are currently not supported
	[Nov15 10:39] overlayfs: idmapped layers are currently not supported
	[Nov15 10:52] overlayfs: idmapped layers are currently not supported
	[Nov15 10:53] overlayfs: idmapped layers are currently not supported
	[Nov15 10:54] overlayfs: idmapped layers are currently not supported
	[Nov15 10:56] overlayfs: idmapped layers are currently not supported
	[Nov15 10:58] overlayfs: idmapped layers are currently not supported
	[Nov15 11:07] overlayfs: idmapped layers are currently not supported
	[  +3.621339] overlayfs: idmapped layers are currently not supported
	[Nov15 11:08] overlayfs: idmapped layers are currently not supported
	[Nov15 11:09] overlayfs: idmapped layers are currently not supported
	[Nov15 11:10] overlayfs: idmapped layers are currently not supported
	[  +3.526164] overlayfs: idmapped layers are currently not supported
	[Nov15 11:12] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ab0d0c34b46d585c39a39112a9d96382b3c2d54b036b01e5aabb4c9adb26fe48] <==
	{"level":"warn","ts":"2025-11-15T11:11:53.896124Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.790585Z","time spent":"7.105534206s","remote":"127.0.0.1:33982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:500 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896135Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.790568Z","time spent":"7.105562742s","remote":"127.0.0.1:33412","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896145Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.786656Z","time spent":"7.109486177s","remote":"127.0.0.1:33262","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896155Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.784378Z","time spent":"7.111774446s","remote":"127.0.0.1:33190","response type":"/etcdserverpb.KV/Range","request count":0,"request size":21,"response count":0,"response size":0,"request content":"key:\"/registry/secrets\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896356Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803214Z","time spent":"7.093138803s","remote":"127.0.0.1:33206","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":0,"request content":"key:\"/registry/configmaps/kube-system/\" range_end:\"/registry/configmaps/kube-system0\" limit:500 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896367Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803180Z","time spent":"7.093183504s","remote":"127.0.0.1:33644","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896378Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803030Z","time spent":"7.093345383s","remote":"127.0.0.1:33792","response type":"/etcdserverpb.KV/Range","request count":0,"request size":31,"response count":0,"response size":0,"request content":"key:\"/registry/volumeattachments\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896390Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803134Z","time spent":"7.09325027s","remote":"127.0.0.1:33532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":0,"response size":0,"request content":"key:\"/registry/networkpolicies\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896420Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.784174Z","time spent":"7.11223919s","remote":"127.0.0.1:33404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" limit:10000 "}
	{"level":"info","ts":"2025-11-15T11:11:53.896435Z","caller":"traceutil/trace.go:172","msg":"trace[1793027741] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; }","duration":"7.098153671s","start":"2025-11-15T11:11:46.798274Z","end":"2025-11-15T11:11:53.896428Z","steps":["trace[1793027741] 'agreement among raft nodes before linearized reading'  (duration: 7.076120166s)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T11:11:53.896596Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803160Z","time spent":"7.09342672s","remote":"127.0.0.1:33862","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" limit:10000 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896624Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.797673Z","time spent":"7.098946471s","remote":"127.0.0.1:33510","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" limit:500 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896636Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.797656Z","time spent":"7.098975567s","remote":"127.0.0.1:33584","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":0,"request content":"key:\"/registry/ipaddresses\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896661Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.797642Z","time spent":"7.099002897s","remote":"127.0.0.1:33340","response type":"/etcdserverpb.KV/Range","request count":0,"request size":31,"response count":0,"response size":0,"request content":"key:\"/registry/minions/ha-439113\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.896674Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.804064Z","time spent":"7.092606129s","remote":"127.0.0.1:33412","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.897422Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803114Z","time spent":"7.094295237s","remote":"127.0.0.1:33808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":0,"request content":"key:\"/registry/volumeattributesclasses\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.897453Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803097Z","time spent":"7.094349005s","remote":"127.0.0.1:33232","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" limit:10000 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.897466Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803071Z","time spent":"7.094390744s","remote":"127.0.0.1:33870","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" limit:10000 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.897881Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803052Z","time spent":"7.094816547s","remote":"127.0.0.1:33820","response type":"/etcdserverpb.KV/Range","request count":0,"request size":32,"response count":0,"response size":0,"request content":"key:\"/registry/csinodes/ha-439113\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.897906Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.802979Z","time spent":"7.094921983s","remote":"127.0.0.1:34108","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":0,"request content":"key:\"/registry/resourceclaimtemplates/\" range_end:\"/registry/resourceclaimtemplates0\" limit:10000 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.897918Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.803012Z","time spent":"7.094902036s","remote":"127.0.0.1:33754","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.895670Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T11:11:46.793133Z","time spent":"7.102533588s","remote":"127.0.0.1:34062","response type":"/etcdserverpb.KV/Range","request count":0,"request size":27,"response count":0,"response size":0,"request content":"key:\"/registry/deviceclasses\" limit:1 "}
	{"level":"warn","ts":"2025-11-15T11:11:53.953288Z","caller":"etcdserver/v3_server.go:888","msg":"ignored out-of-date read index response; local node read indexes queueing up and waiting to be in sync with leader","sent-request-id":8128041333002731821,"received-request-id":8128041333002731820}
	{"level":"info","ts":"2025-11-15T11:11:54.143241Z","caller":"traceutil/trace.go:172","msg":"trace[808255463] linearizableReadLoop","detail":"{readStateIndex:4470; appliedIndex:4470; }","duration":"172.978406ms","start":"2025-11-15T11:11:53.970246Z","end":"2025-11-15T11:11:54.143224Z","steps":["trace[808255463] 'read index received'  (duration: 172.965146ms)","trace[808255463] 'applied index is now lower than readState.Index'  (duration: 12.513µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T11:11:54.143416Z","caller":"traceutil/trace.go:172","msg":"trace[686608144] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:3721; }","duration":"174.658102ms","start":"2025-11-15T11:11:53.968751Z","end":"2025-11-15T11:11:54.143410Z","steps":["trace[686608144] 'agreement among raft nodes before linearized reading'  (duration: 174.609536ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:16:09 up  2:58,  0 user,  load average: 0.51, 1.08, 1.35
	Linux ha-439113 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [624e9c4484de9254bf51adb5f68cf3ee64fa67c57ec0731d0bf92706a6167a9c] <==
	I1115 11:15:28.421861       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:15:38.424929       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:15:38.424981       1 main.go:301] handling current node
	I1115 11:15:38.424998       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:15:38.425004       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:15:38.425206       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:15:38.425223       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:15:48.426519       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:15:48.426554       1 main.go:301] handling current node
	I1115 11:15:48.426571       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:15:48.426577       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:15:48.428224       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:15:48.428258       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:15:58.425018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:15:58.425129       1 main.go:301] handling current node
	I1115 11:15:58.425169       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:15:58.425207       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:15:58.425384       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:15:58.425421       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:16:08.421653       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:16:08.421690       1 main.go:301] handling current node
	I1115 11:16:08.421707       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:16:08.421713       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:16:08.421875       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:16:08.421889       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c9aa769ac1e410d0690ad31ea1ef812bb7de4c70e937d471392caf66737a2862] <==
	{"level":"warn","ts":"2025-11-15T11:11:11.780145Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001588b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780169Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001d63860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780193Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001969680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780223Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40027a8d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780249Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40027a8d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780277Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40022752c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780304Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40023e4780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780333Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026c3860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780359Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026c3860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780383Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019be3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780406Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002bd4780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780427Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002bd4780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780448Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014fe960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780469Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001798960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780496Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001798960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780520Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015881e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780543Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019bed20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780567Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40025c4d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780589Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021ce5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780615Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021ce5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780637Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014fef00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780660Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014fef00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780685Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400201a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	F1115 11:11:17.182112       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	{"level":"warn","ts":"2025-11-15T11:11:17.353763Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400250af00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	
	
	==> kube-apiserver [d24d48c3f9b01e8a715249be7330e6cfad6f59261b7723b5de70efa554928964] <==
	I1115 11:11:54.167816       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 11:11:54.174315       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:11:54.174482       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 11:11:54.197129       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 11:11:54.198171       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 11:11:54.225142       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 11:11:54.260659       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 11:11:54.275062       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 11:11:54.276988       1 cache.go:39] Caches are synced for autoregister controller
	I1115 11:11:54.298453       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:11:54.354535       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1115 11:11:54.363129       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1115 11:11:54.364714       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 11:11:54.378229       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 11:11:54.378262       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 11:11:54.378385       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 11:11:54.401493       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1115 11:11:54.415287       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1115 11:11:54.477155       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 11:11:54.477232       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 11:11:55.801917       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1115 11:11:56.437942       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1115 11:12:01.275927       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 11:12:31.830683       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 11:12:37.901647       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [45eb4921c003b25c5119ab01196399bab3eb8157fb07652ba3dcd97194afeb00] <==
	I1115 11:12:31.388664       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 11:12:31.392138       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 11:12:31.392251       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-439113-m04"
	I1115 11:12:31.393934       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 11:12:31.394231       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:12:31.405980       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 11:12:31.406062       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 11:12:31.406150       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 11:12:31.406185       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 11:12:31.407984       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 11:12:31.413011       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 11:12:31.413156       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 11:12:31.418877       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 11:12:31.426163       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 11:12:31.428921       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 11:12:31.429031       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 11:12:31.429079       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 11:12:31.434013       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 11:12:31.441466       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 11:12:31.446524       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 11:12:31.449650       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 11:12:31.481074       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:12:31.481105       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:12:31.481113       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:12:31.519964       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [be718981167470587e7edcab954bb28586e88b90bde200f9d703d4bf87527c41] <==
	I1115 11:11:30.275411       1 serving.go:386] Generated self-signed cert in-memory
	I1115 11:11:31.365181       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1115 11:11:31.365208       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:11:31.368367       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1115 11:11:31.370810       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 11:11:31.370917       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1115 11:11:31.371086       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1115 11:11:41.387475       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [16ebc70b03ad38e3a7e5abff3cead02f628f4a722d181136401c1a8c416ae823] <==
	I1115 11:12:01.396280       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:12:01.491396       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:12:01.592661       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:12:01.592701       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 11:12:01.592780       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:12:01.742121       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:12:01.742188       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:12:01.763218       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:12:01.764138       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:12:01.764797       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:12:01.789051       1 config.go:200] "Starting service config controller"
	I1115 11:12:01.789146       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:12:01.789599       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:12:01.789660       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:12:01.789732       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:12:01.789761       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:12:01.794216       1 config.go:309] "Starting node config controller"
	I1115 11:12:01.794306       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:12:01.794337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:12:01.890300       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:12:01.890346       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 11:12:01.890389       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e0b918dd4970fd4deab2473f719156caad36c70e91836ec9407fd62c0e66c2f1] <==
	E1115 11:10:59.658870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 11:11:00.345811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 11:11:00.432409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 11:11:02.384472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 11:11:02.426048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:11:21.983897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 11:11:23.265022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 11:11:26.946829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 11:11:27.361077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 11:11:28.929218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 11:11:29.282135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:11:29.741098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 11:11:31.948528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 11:11:32.427201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 11:11:32.729768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 11:11:33.157818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 11:11:34.701567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 11:11:35.752287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 11:11:36.951331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:11:37.615660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 11:11:38.448988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 11:11:40.797158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 11:11:41.756113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 11:11:44.289532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1115 11:12:21.588534       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844253     802 projected.go:196] Error preparing data for projected volume kube-api-access-sd5j8 for pod kube-system/storage-provisioner: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844286     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a63ca66-7de2-40d8-96f0-a99da4ba3411-kube-api-access-sd5j8 podName:6a63ca66-7de2-40d8-96f0-a99da4ba3411 nodeName:}" failed. No retries permitted until 2025-11-15 11:11:57.844277125 +0000 UTC m=+109.205504722 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sd5j8" (UniqueName: "kubernetes.io/projected/6a63ca66-7de2-40d8-96f0-a99da4ba3411-kube-api-access-sd5j8") pod "storage-provisioner" (UID: "6a63ca66-7de2-40d8-96f0-a99da4ba3411") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844314     802 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844326     802 projected.go:196] Error preparing data for projected volume kube-api-access-5ghqb for pod default/busybox-7b57f96db7-vddcm: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844354     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/92adc10b-e910-45d1-8267-ee2e884d0dcc-kube-api-access-5ghqb podName:92adc10b-e910-45d1-8267-ee2e884d0dcc nodeName:}" failed. No retries permitted until 2025-11-15 11:11:57.844345777 +0000 UTC m=+109.205573365 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5ghqb" (UniqueName: "kubernetes.io/projected/92adc10b-e910-45d1-8267-ee2e884d0dcc-kube-api-access-5ghqb") pod "busybox-7b57f96db7-vddcm" (UID: "92adc10b-e910-45d1-8267-ee2e884d0dcc") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844373     802 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844479     802 projected.go:196] Error preparing data for projected volume kube-api-access-b6xlh for pod kube-system/coredns-66bc5c9577-4g6sm: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844521     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9460f377-28d8-418c-9dab-9428dfbfca1d-kube-api-access-b6xlh podName:9460f377-28d8-418c-9dab-9428dfbfca1d nodeName:}" failed. No retries permitted until 2025-11-15 11:11:57.844511856 +0000 UTC m=+109.205739445 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-b6xlh" (UniqueName: "kubernetes.io/projected/9460f377-28d8-418c-9dab-9428dfbfca1d-kube-api-access-b6xlh") pod "coredns-66bc5c9577-4g6sm" (UID: "9460f377-28d8-418c-9dab-9428dfbfca1d") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:57 ha-439113 kubelet[802]: I1115 11:11:57.908131     802 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 11:11:58 ha-439113 kubelet[802]: W1115 11:11:58.358260     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-8504950f9102e2d3678db003685a9003674d358c2d886fa984b1f644a575da04 WatchSource:0}: Error finding container 8504950f9102e2d3678db003685a9003674d358c2d886fa984b1f644a575da04: Status 404 returned error can't find the container with id 8504950f9102e2d3678db003685a9003674d358c2d886fa984b1f644a575da04
	Nov 15 11:11:58 ha-439113 kubelet[802]: W1115 11:11:58.418603     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-d43213c9afa20eab4c28068b149534132632427cb558bccbf02b8458b2dd0280 WatchSource:0}: Error finding container d43213c9afa20eab4c28068b149534132632427cb558bccbf02b8458b2dd0280: Status 404 returned error can't find the container with id d43213c9afa20eab4c28068b149534132632427cb558bccbf02b8458b2dd0280
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.705715     802 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.705866     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4718f104-1eea-4e92-b339-dc6ae067eee3-kube-proxy podName:4718f104-1eea-4e92-b339-dc6ae067eee3 nodeName:}" failed. No retries permitted until 2025-11-15 11:12:00.70583574 +0000 UTC m=+112.067063329 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/4718f104-1eea-4e92-b339-dc6ae067eee3-kube-proxy") pod "kube-proxy-k7bcn" (UID: "4718f104-1eea-4e92-b339-dc6ae067eee3") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.911022     802 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.911067     802 projected.go:196] Error preparing data for projected volume kube-api-access-5ghqb for pod default/busybox-7b57f96db7-vddcm: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.911165     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/92adc10b-e910-45d1-8267-ee2e884d0dcc-kube-api-access-5ghqb podName:92adc10b-e910-45d1-8267-ee2e884d0dcc nodeName:}" failed. No retries permitted until 2025-11-15 11:12:00.91114076 +0000 UTC m=+112.272368357 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5ghqb" (UniqueName: "kubernetes.io/projected/92adc10b-e910-45d1-8267-ee2e884d0dcc-kube-api-access-5ghqb") pod "busybox-7b57f96db7-vddcm" (UID: "92adc10b-e910-45d1-8267-ee2e884d0dcc") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:12:00 ha-439113 kubelet[802]: I1115 11:12:00.852948     802 scope.go:117] "RemoveContainer" containerID="be718981167470587e7edcab954bb28586e88b90bde200f9d703d4bf87527c41"
	Nov 15 11:12:00 ha-439113 kubelet[802]: E1115 11:12:00.853132     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-439113_kube-system(61daecae9db4def537bd68f54312f1ae)\"" pod="kube-system/kube-controller-manager-ha-439113" podUID="61daecae9db4def537bd68f54312f1ae"
	Nov 15 11:12:01 ha-439113 kubelet[802]: W1115 11:12:01.080611     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-b9f35a414830a814a3c7874120d74394bc21adeb5906a90adb474cbab5a11397 WatchSource:0}: Error finding container b9f35a414830a814a3c7874120d74394bc21adeb5906a90adb474cbab5a11397: Status 404 returned error can't find the container with id b9f35a414830a814a3c7874120d74394bc21adeb5906a90adb474cbab5a11397
	Nov 15 11:12:08 ha-439113 kubelet[802]: E1115 11:12:08.835937     802 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/54bc03e5aa3c6fcbbe6935a8420792c10e6b1241a59bf0fdde396399ed9639de/diff" to get inode usage: stat /var/lib/containers/storage/overlay/54bc03e5aa3c6fcbbe6935a8420792c10e6b1241a59bf0fdde396399ed9639de/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-439113_61daecae9db4def537bd68f54312f1ae/kube-controller-manager/3.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-439113_61daecae9db4def537bd68f54312f1ae/kube-controller-manager/3.log: no such file or directory
	Nov 15 11:12:08 ha-439113 kubelet[802]: E1115 11:12:08.849660     802 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/eb045b83b5da536e46c3745bb2a8803b5c05df65a3052a5d8a939a5b61aff0de/diff" to get inode usage: stat /var/lib/containers/storage/overlay/eb045b83b5da536e46c3745bb2a8803b5c05df65a3052a5d8a939a5b61aff0de/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-439113_61daecae9db4def537bd68f54312f1ae/kube-controller-manager/4.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-439113_61daecae9db4def537bd68f54312f1ae/kube-controller-manager/4.log: no such file or directory
	Nov 15 11:12:12 ha-439113 kubelet[802]: I1115 11:12:12.853172     802 scope.go:117] "RemoveContainer" containerID="be718981167470587e7edcab954bb28586e88b90bde200f9d703d4bf87527c41"
	Nov 15 11:12:12 ha-439113 kubelet[802]: E1115 11:12:12.853836     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-439113_kube-system(61daecae9db4def537bd68f54312f1ae)\"" pod="kube-system/kube-controller-manager-ha-439113" podUID="61daecae9db4def537bd68f54312f1ae"
	Nov 15 11:12:27 ha-439113 kubelet[802]: I1115 11:12:27.852165     802 scope.go:117] "RemoveContainer" containerID="be718981167470587e7edcab954bb28586e88b90bde200f9d703d4bf87527c41"
	Nov 15 11:12:28 ha-439113 kubelet[802]: I1115 11:12:28.302685     802 scope.go:117] "RemoveContainer" containerID="d86466a64c1754474a329490ff47ef2c868ab7ca5cee646b6d77e75e89205609"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-439113 -n ha-439113
helpers_test.go:269: (dbg) Run:  kubectl --context ha-439113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.060575876s)
ha_test.go:309: expected profile "ha-439113" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-439113\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-439113\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-439113\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"N
ame\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.49.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-p
lugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":fals
e,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-439113
helpers_test.go:243: (dbg) docker inspect ha-439113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc",
	        "Created": "2025-11-15T10:52:17.169946413Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:10:01.380531105Z",
	            "FinishedAt": "2025-11-15T11:10:00.266325121Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/hosts",
	        "LogPath": "/var/lib/docker/containers/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc-json.log",
	        "Name": "/ha-439113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-439113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-439113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc",
	                "LowerDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d452d056adbe4761b4ed71cf77fac5474c808421de2b1194cc5ee8a23879de8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-439113",
	                "Source": "/var/lib/docker/volumes/ha-439113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-439113",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-439113",
	                "name.minikube.sigs.k8s.io": "ha-439113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1552653af76d6dd7c6162ea9f89df1884eadd013a674c8ab945e116cac5292c2",
	            "SandboxKey": "/var/run/docker/netns/1552653af76d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33569"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33570"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33573"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33571"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33572"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-439113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:f1:61:d7:6f:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70b4341e58399e11a79033573f4328a7d843f08aeced339b6115cf0c5d327007",
	                    "EndpointID": "ecb9ec3e068adfb90b6cea007bf9d7996cf48ef1255455853c88ec25ad196b03",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-439113",
	                        "d546a4fc19d8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
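For reference, the host port bound to the API server (8443/tcp) in the inspect output above can be read back with docker's Go-template formatting; a sketch against the same container name:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-439113

Against the NetworkSettings block above this resolves to 33572, with SSH mapped to 33569.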
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-439113 -n ha-439113
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 logs -n 25: (1.77065069s)
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test_ha-439113-m03_ha-439113-m04.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp testdata/cp-test.txt ha-439113-m04:/home/docker/cp-test.txt                                                             │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1077460994/001/cp-test_ha-439113-m04.txt │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113:/home/docker/cp-test_ha-439113-m04_ha-439113.txt                       │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113.txt                                                 │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113-m02:/home/docker/cp-test_ha-439113-m04_ha-439113-m02.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m02 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113-m02.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ cp      │ ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113-m03:/home/docker/cp-test_ha-439113-m04_ha-439113-m03.txt               │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ ssh     │ ha-439113 ssh -n ha-439113-m03 sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113-m03.txt                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:57 UTC │
	│ node    │ ha-439113 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:57 UTC │ 15 Nov 25 10:58 UTC │
	│ node    │ ha-439113 node start m02 --alsologtostderr -v 5                                                                                      │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 10:58 UTC │                     │
	│ node    │ ha-439113 node list --alsologtostderr -v 5                                                                                           │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:06 UTC │                     │
	│ stop    │ ha-439113 stop --alsologtostderr -v 5                                                                                                │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:06 UTC │ 15 Nov 25 11:07 UTC │
	│ start   │ ha-439113 start --wait true --alsologtostderr -v 5                                                                                   │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:07 UTC │ 15 Nov 25 11:09 UTC │
	│ node    │ ha-439113 node list --alsologtostderr -v 5                                                                                           │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:09 UTC │                     │
	│ node    │ ha-439113 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:09 UTC │ 15 Nov 25 11:09 UTC │
	│ stop    │ ha-439113 stop --alsologtostderr -v 5                                                                                                │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:09 UTC │ 15 Nov 25 11:10 UTC │
	│ start   │ ha-439113 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:10 UTC │                     │
	│ node    │ ha-439113 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-439113 │ jenkins │ v1.37.0 │ 15 Nov 25 11:16 UTC │ 15 Nov 25 11:17 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:10:01
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:10:01.082148  644414 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:10:01.082358  644414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:10:01.082389  644414 out.go:374] Setting ErrFile to fd 2...
	I1115 11:10:01.082410  644414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:10:01.082810  644414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:10:01.083841  644414 out.go:368] Setting JSON to false
	I1115 11:10:01.084783  644414 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10352,"bootTime":1763194649,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:10:01.084926  644414 start.go:143] virtualization:  
	I1115 11:10:01.088178  644414 out.go:179] * [ha-439113] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:10:01.092058  644414 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:10:01.092190  644414 notify.go:221] Checking for updates...
	I1115 11:10:01.098137  644414 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:10:01.101114  644414 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:10:01.104087  644414 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:10:01.107082  644414 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:10:01.110104  644414 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:10:01.113527  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:01.114129  644414 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:10:01.149515  644414 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:10:01.149650  644414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:10:01.214815  644414 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-15 11:10:01.203630276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:10:01.214940  644414 docker.go:319] overlay module found
	I1115 11:10:01.218203  644414 out.go:179] * Using the docker driver based on existing profile
	I1115 11:10:01.222067  644414 start.go:309] selected driver: docker
	I1115 11:10:01.222095  644414 start.go:930] validating driver "docker" against &{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:10:01.222249  644414 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:10:01.222374  644414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:10:01.290199  644414 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-15 11:10:01.272152631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:10:01.290633  644414 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:10:01.290666  644414 cni.go:84] Creating CNI manager for ""
	I1115 11:10:01.290735  644414 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1115 11:10:01.290785  644414 start.go:353] cluster config:
	{Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:10:01.295923  644414 out.go:179] * Starting "ha-439113" primary control-plane node in "ha-439113" cluster
	I1115 11:10:01.298854  644414 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:10:01.301829  644414 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:10:01.304672  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:10:01.304725  644414 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 11:10:01.304736  644414 cache.go:65] Caching tarball of preloaded images
	I1115 11:10:01.304766  644414 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:10:01.304826  644414 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:10:01.304837  644414 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:10:01.305022  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:01.325510  644414 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:10:01.325535  644414 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:10:01.325557  644414 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:10:01.325582  644414 start.go:360] acquireMachinesLock for ha-439113: {Name:mk8f5fddf42cbee62c5cd775824daee5e174c730 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:10:01.325648  644414 start.go:364] duration metric: took 38.851µs to acquireMachinesLock for "ha-439113"
	I1115 11:10:01.325671  644414 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:10:01.325676  644414 fix.go:54] fixHost starting: 
	I1115 11:10:01.325927  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:10:01.343552  644414 fix.go:112] recreateIfNeeded on ha-439113: state=Stopped err=<nil>
	W1115 11:10:01.343585  644414 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:10:01.346902  644414 out.go:252] * Restarting existing docker container for "ha-439113" ...
	I1115 11:10:01.347040  644414 cli_runner.go:164] Run: docker start ha-439113
	I1115 11:10:01.611121  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:10:01.630743  644414 kic.go:430] container "ha-439113" state is running.
	I1115 11:10:01.631322  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:10:01.657614  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:01.657847  644414 machine.go:94] provisionDockerMachine start ...
	I1115 11:10:01.657906  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:01.682277  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:01.682596  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:01.682604  644414 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:10:01.683536  644414 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:10:04.832447  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113
	
	I1115 11:10:04.832472  644414 ubuntu.go:182] provisioning hostname "ha-439113"
	I1115 11:10:04.832543  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:04.850661  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:04.850981  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:04.850997  644414 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113 && echo "ha-439113" | sudo tee /etc/hostname
	I1115 11:10:05.019162  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113
	
	I1115 11:10:05.019373  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:05.040944  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:05.041275  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:05.041312  644414 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:10:05.193601  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:10:05.193631  644414 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:10:05.193651  644414 ubuntu.go:190] setting up certificates
	I1115 11:10:05.193661  644414 provision.go:84] configureAuth start
	I1115 11:10:05.193734  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:10:05.211992  644414 provision.go:143] copyHostCerts
	I1115 11:10:05.212041  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:05.212076  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:10:05.212095  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:05.212172  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:10:05.212264  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:05.212287  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:10:05.212292  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:05.212324  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:10:05.212370  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:05.212391  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:10:05.212398  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:05.212423  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:10:05.212513  644414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113 san=[127.0.0.1 192.168.49.2 ha-439113 localhost minikube]
	I1115 11:10:06.070863  644414 provision.go:177] copyRemoteCerts
	I1115 11:10:06.070938  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:10:06.071014  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.090345  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.196902  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 11:10:06.196968  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:10:06.216309  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 11:10:06.216383  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1115 11:10:06.234832  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 11:10:06.234898  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:10:06.252396  644414 provision.go:87] duration metric: took 1.058711326s to configureAuth
	I1115 11:10:06.252465  644414 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:10:06.252742  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:06.252850  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.270036  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:06.270362  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33569 <nil> <nil>}
	I1115 11:10:06.270383  644414 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:10:06.614480  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:10:06.614501  644414 machine.go:97] duration metric: took 4.956644455s to provisionDockerMachine
	I1115 11:10:06.614512  644414 start.go:293] postStartSetup for "ha-439113" (driver="docker")
	I1115 11:10:06.614523  644414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:10:06.614593  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:10:06.614633  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.635190  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.741143  644414 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:10:06.744492  644414 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:10:06.744522  644414 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:10:06.744534  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:10:06.744591  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:10:06.744682  644414 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:10:06.744693  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 11:10:06.744792  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:10:06.752206  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:10:06.769623  644414 start.go:296] duration metric: took 155.096546ms for postStartSetup
	I1115 11:10:06.769735  644414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:10:06.769797  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.786747  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.889967  644414 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:10:06.894381  644414 fix.go:56] duration metric: took 5.56869817s for fixHost
	I1115 11:10:06.894404  644414 start.go:83] releasing machines lock for "ha-439113", held for 5.568743749s
	I1115 11:10:06.894468  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 11:10:06.912478  644414 ssh_runner.go:195] Run: cat /version.json
	I1115 11:10:06.912503  644414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:10:06.912549  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.912557  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:10:06.935963  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:06.943189  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:10:07.140607  644414 ssh_runner.go:195] Run: systemctl --version
	I1115 11:10:07.147286  644414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:10:07.181632  644414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:10:07.186178  644414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:10:07.186315  644414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:10:07.194727  644414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:10:07.194754  644414 start.go:496] detecting cgroup driver to use...
	I1115 11:10:07.194787  644414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:10:07.194836  644414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:10:07.211038  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:10:07.228463  644414 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:10:07.228531  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:10:07.245230  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:10:07.259066  644414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:10:07.400677  644414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:10:07.528374  644414 docker.go:234] disabling docker service ...
	I1115 11:10:07.528452  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:10:07.544386  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:10:07.557994  644414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:10:07.673355  644414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:10:07.789554  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:10:07.802473  644414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:10:07.816520  644414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:10:07.816638  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.825590  644414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:10:07.825753  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.834624  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.843465  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.852151  644414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:10:07.860174  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.869179  644414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.877916  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:07.886986  644414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:10:07.894890  644414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:10:07.902588  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:10:08.022572  644414 ssh_runner.go:195] Run: sudo systemctl restart crio
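For readability, the CRI-O reconfiguration logged between 11:10:07.802 and 11:10:08.022 above amounts to the following shell sequence. This is only a condensed sketch assembled from the Run: lines; the config path, pause image tag and sysctl value are taken verbatim from the log, and nothing beyond those logged commands is implied.

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and switch CRI-O to the cgroupfs driver, with conmon in the pod cgroup
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # make sure a default_sysctls list exists, then allow unprivileged low ports inside pods
    sudo grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
    # enable IPv4 forwarding and restart the runtime
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio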
	I1115 11:10:08.143861  644414 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:10:08.144001  644414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:10:08.148082  644414 start.go:564] Will wait 60s for crictl version
	I1115 11:10:08.148187  644414 ssh_runner.go:195] Run: which crictl
	I1115 11:10:08.151776  644414 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:10:08.176109  644414 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:10:08.176190  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:10:08.206377  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:10:08.246152  644414 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:10:08.249013  644414 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:10:08.265246  644414 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 11:10:08.269229  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:10:08.279381  644414 kubeadm.go:884] updating cluster {Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:10:08.279538  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:10:08.279594  644414 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:10:08.313662  644414 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:10:08.313686  644414 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:10:08.313742  644414 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:10:08.341156  644414 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:10:08.341180  644414 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:10:08.341189  644414 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 11:10:08.341297  644414 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:10:08.341383  644414 ssh_runner.go:195] Run: crio config
	I1115 11:10:08.417323  644414 cni.go:84] Creating CNI manager for ""
	I1115 11:10:08.417346  644414 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1115 11:10:08.417367  644414 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:10:08.417391  644414 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-439113 NodeName:ha-439113 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:10:08.417529  644414 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-439113"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
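The kubeadm configuration rendered above is not applied directly; it is written to /var/tmp/minikube/kubeadm.yaml.new on the node (the 2206-byte scp a few lines below) and later compared against the previously applied file. A minimal sketch of that comparison, using only paths that appear in this log:

    # show whether the freshly rendered config differs from the one already on the node
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new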
	I1115 11:10:08.417554  644414 kube-vip.go:115] generating kube-vip config ...
	I1115 11:10:08.417612  644414 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 11:10:08.429604  644414 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:10:08.429765  644414 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
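This static pod manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip directly on the control-plane node. As a sketch (the exact commands are an assumption, not part of this log), the VIP 192.168.49.254 configured above could be checked from inside the node, e.g. via minikube ssh -p ha-439113:

    # confirm the manifest is in place and the VIP has been claimed on eth0
    sudo cat /etc/kubernetes/manifests/kube-vip.yaml
    ip addr show eth0 | grep 192.168.49.254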
	I1115 11:10:08.429836  644414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:10:08.437846  644414 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:10:08.437927  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1115 11:10:08.445900  644414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1115 11:10:08.459668  644414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:10:08.472428  644414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1115 11:10:08.485415  644414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 11:10:08.498516  644414 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 11:10:08.502240  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:10:08.512200  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:10:08.622281  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:10:08.654146  644414 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.2
	I1115 11:10:08.654177  644414 certs.go:195] generating shared ca certs ...
	I1115 11:10:08.654195  644414 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:08.654338  644414 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:10:08.654393  644414 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:10:08.654406  644414 certs.go:257] generating profile certs ...
	I1115 11:10:08.654496  644414 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 11:10:08.654531  644414 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423
	I1115 11:10:08.654549  644414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1115 11:10:09.275584  644414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423 ...
	I1115 11:10:09.275661  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423: {Name:mkcc7bf2bc49672369082197c2ea205c3b413e73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:09.275872  644414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423 ...
	I1115 11:10:09.275912  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423: {Name:mkddc44bc05ba35828280547efe210b00108cabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:09.276063  644414 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt.3557f423 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt
	I1115 11:10:09.276243  644414 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.3557f423 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key
	I1115 11:10:09.276437  644414 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
	I1115 11:10:09.276473  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 11:10:09.276509  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 11:10:09.276554  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 11:10:09.276590  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 11:10:09.276617  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 11:10:09.276659  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 11:10:09.276698  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 11:10:09.276726  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 11:10:09.276806  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:10:09.276885  644414 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:10:09.276915  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:10:09.276959  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:10:09.277013  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:10:09.277057  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:10:09.277153  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:10:09.277220  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.277264  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.277297  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.277887  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:10:09.296564  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:10:09.314781  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:10:09.335633  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:10:09.353146  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 11:10:09.370859  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 11:10:09.388232  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:10:09.410774  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:10:09.439944  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:10:09.477014  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:10:09.526226  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:10:09.559717  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:10:09.610930  644414 ssh_runner.go:195] Run: openssl version
	I1115 11:10:09.623460  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:10:09.643972  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.652807  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.653014  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:10:09.741237  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:10:09.749901  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:10:09.767184  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.774726  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.774846  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:10:09.838136  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:10:09.846476  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:10:09.890099  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.895038  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.895102  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:10:09.961757  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:10:09.976918  644414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:10:09.985687  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:10:10.033177  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:10:10.079291  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:10:10.125057  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:10:10.168941  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:10:10.219261  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
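Each certificate check above relies on openssl's -checkend flag: the command exits 0 if the certificate is still valid 86400 seconds (24 hours) from now and non-zero if it expires within that window, presumably so minikube can decide whether the existing control-plane certificates can be reused on restart. The same check can be reproduced by hand, for example:

    # 0 = still valid for at least another 24h, 1 = expires (or already expired) within 24h
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; echo $?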
	I1115 11:10:10.289307  644414 kubeadm.go:401] StartCluster: {Name:ha-439113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:10:10.289486  644414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:10:10.289574  644414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:10:10.354477  644414 cri.go:89] found id: "ab0d0c34b46d585c39a39112a9d96382b3c2d54b036b01e5aabb4c9adb26fe48"
	I1115 11:10:10.354514  644414 cri.go:89] found id: "f5462600e253c742d103a09b518cadafb5354c9b674147e2394344fc4f6cdd17"
	I1115 11:10:10.354519  644414 cri.go:89] found id: "c9aa769ac1e410d0690ad31ea1ef812bb7de4c70e937d471392caf66737a2862"
	I1115 11:10:10.354523  644414 cri.go:89] found id: "49f53dedd4e32694c1de85010bf005f40b10dfe1e581005787ce4f5229936764"
	I1115 11:10:10.354526  644414 cri.go:89] found id: "e0b918dd4970fd4deab2473f719156caad36c70e91836ec9407fd62c0e66c2f1"
	I1115 11:10:10.354530  644414 cri.go:89] found id: ""
	I1115 11:10:10.354587  644414 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 11:10:10.370661  644414 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:10:10Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:10:10.370748  644414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:10:10.382258  644414 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:10:10.382296  644414 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:10:10.382347  644414 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:10:10.390626  644414 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:10:10.391102  644414 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-439113" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:10:10.391230  644414 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-584713/kubeconfig needs updating (will repair): [kubeconfig missing "ha-439113" cluster setting kubeconfig missing "ha-439113" context setting]
	I1115 11:10:10.391547  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:10.392161  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 11:10:10.393236  644414 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1115 11:10:10.393317  644414 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 11:10:10.393332  644414 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 11:10:10.393338  644414 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 11:10:10.393347  644414 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 11:10:10.393352  644414 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 11:10:10.394951  644414 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:10:10.405841  644414 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1115 11:10:10.405873  644414 kubeadm.go:602] duration metric: took 23.570972ms to restartPrimaryControlPlane
	I1115 11:10:10.405883  644414 kubeadm.go:403] duration metric: took 116.586705ms to StartCluster
	I1115 11:10:10.405898  644414 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:10.405969  644414 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:10:10.406686  644414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:10:10.406905  644414 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:10:10.406942  644414 start.go:242] waiting for startup goroutines ...
	I1115 11:10:10.406961  644414 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:10:10.407533  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:10.412935  644414 out.go:179] * Enabled addons: 
	I1115 11:10:10.415804  644414 addons.go:515] duration metric: took 8.829529ms for enable addons: enabled=[]
	I1115 11:10:10.415842  644414 start.go:247] waiting for cluster config update ...
	I1115 11:10:10.415858  644414 start.go:256] writing updated cluster config ...
	I1115 11:10:10.419060  644414 out.go:203] 
	I1115 11:10:10.422348  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:10.422466  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:10.425867  644414 out.go:179] * Starting "ha-439113-m02" control-plane node in "ha-439113" cluster
	I1115 11:10:10.428658  644414 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:10:10.431470  644414 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:10:10.434231  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:10:10.434251  644414 cache.go:65] Caching tarball of preloaded images
	I1115 11:10:10.434373  644414 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:10:10.434390  644414 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:10:10.434509  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:10.434718  644414 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:10:10.459579  644414 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:10:10.459605  644414 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:10:10.459619  644414 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:10:10.459645  644414 start.go:360] acquireMachinesLock for ha-439113-m02: {Name:mk3e9fb80c1177aa3d9d60f93ad9a2d436f1d794 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:10:10.459703  644414 start.go:364] duration metric: took 38.917µs to acquireMachinesLock for "ha-439113-m02"
	I1115 11:10:10.459726  644414 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:10:10.459732  644414 fix.go:54] fixHost starting: m02
	I1115 11:10:10.460001  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:10:10.490667  644414 fix.go:112] recreateIfNeeded on ha-439113-m02: state=Stopped err=<nil>
	W1115 11:10:10.490698  644414 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:10:10.494022  644414 out.go:252] * Restarting existing docker container for "ha-439113-m02" ...
	I1115 11:10:10.494103  644414 cli_runner.go:164] Run: docker start ha-439113-m02
	I1115 11:10:10.848234  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:10:10.876991  644414 kic.go:430] container "ha-439113-m02" state is running.
	I1115 11:10:10.877372  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:10:10.907598  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:10:10.907880  644414 machine.go:94] provisionDockerMachine start ...
	I1115 11:10:10.907948  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:10.946130  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:10.946438  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:10.946448  644414 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:10:10.947277  644414 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60346->127.0.0.1:33574: read: connection reset by peer
	I1115 11:10:14.161070  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02
	
	I1115 11:10:14.161137  644414 ubuntu.go:182] provisioning hostname "ha-439113-m02"
	I1115 11:10:14.161234  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:14.193112  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:14.193410  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:14.193421  644414 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113-m02 && echo "ha-439113-m02" | sudo tee /etc/hostname
	I1115 11:10:14.414884  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m02
	
	I1115 11:10:14.415071  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:14.441593  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:14.441897  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:14.441920  644414 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:10:14.655329  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:10:14.655419  644414 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:10:14.655450  644414 ubuntu.go:190] setting up certificates
	I1115 11:10:14.655485  644414 provision.go:84] configureAuth start
	I1115 11:10:14.655584  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:10:14.684954  644414 provision.go:143] copyHostCerts
	I1115 11:10:14.684996  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:14.685029  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:10:14.685035  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:10:14.685109  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:10:14.685187  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:14.685203  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:10:14.685208  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:10:14.685233  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:10:14.685270  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:14.685286  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:10:14.685290  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:10:14.685314  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:10:14.685358  644414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113-m02 san=[127.0.0.1 192.168.49.3 ha-439113-m02 localhost minikube]
	I1115 11:10:15.164962  644414 provision.go:177] copyRemoteCerts
	I1115 11:10:15.165087  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:10:15.165161  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:15.183565  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:15.309845  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 11:10:15.309910  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:10:15.352565  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 11:10:15.352638  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 11:10:15.389073  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 11:10:15.389137  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:10:15.436657  644414 provision.go:87] duration metric: took 781.140009ms to configureAuth
	I1115 11:10:15.436685  644414 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:10:15.436943  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:15.437049  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:15.467485  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:10:15.467817  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33574 <nil> <nil>}
	I1115 11:10:15.467839  644414 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:10:16.972469  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:10:16.972493  644414 machine.go:97] duration metric: took 6.064595432s to provisionDockerMachine
	I1115 11:10:16.972505  644414 start.go:293] postStartSetup for "ha-439113-m02" (driver="docker")
	I1115 11:10:16.972515  644414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:10:16.972579  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:10:16.972636  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.011353  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.141531  644414 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:10:17.145724  644414 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:10:17.145750  644414 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:10:17.145761  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:10:17.145819  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:10:17.145893  644414 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:10:17.145901  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 11:10:17.146000  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:10:17.153864  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:10:17.175408  644414 start.go:296] duration metric: took 202.888277ms for postStartSetup
	I1115 11:10:17.175529  644414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:10:17.175603  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.202540  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.314494  644414 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:10:17.322089  644414 fix.go:56] duration metric: took 6.862349383s for fixHost
	I1115 11:10:17.322116  644414 start.go:83] releasing machines lock for "ha-439113-m02", held for 6.862399853s
	I1115 11:10:17.322193  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m02
	I1115 11:10:17.346984  644414 out.go:179] * Found network options:
	I1115 11:10:17.349992  644414 out.go:179]   - NO_PROXY=192.168.49.2
	W1115 11:10:17.357013  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:10:17.357074  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 11:10:17.357145  644414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:10:17.357204  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.357473  644414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:10:17.357528  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m02
	I1115 11:10:17.392713  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.393588  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m02/id_rsa Username:docker}
	I1115 11:10:17.599074  644414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:10:17.766809  644414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:10:17.766905  644414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:10:17.789163  644414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:10:17.789191  644414 start.go:496] detecting cgroup driver to use...
	I1115 11:10:17.789231  644414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:10:17.789289  644414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:10:17.815110  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:10:17.838070  644414 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:10:17.838143  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:10:17.860257  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:10:17.879590  644414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:10:18.110145  644414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:10:18.361820  644414 docker.go:234] disabling docker service ...
	I1115 11:10:18.361900  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:10:18.384569  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:10:18.416731  644414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:10:18.641786  644414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:10:18.837399  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:10:18.857492  644414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:10:18.878074  644414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:10:18.878149  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.894400  644414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:10:18.894493  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.905139  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.919066  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.934192  644414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:10:18.947793  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.962215  644414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.975913  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:10:18.990422  644414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:10:19.001078  644414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:10:19.010948  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:10:19.243052  644414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:11:49.588377  644414 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.345288768s)
	I1115 11:11:49.588399  644414 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:11:49.588453  644414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:11:49.592631  644414 start.go:564] Will wait 60s for crictl version
	I1115 11:11:49.592694  644414 ssh_runner.go:195] Run: which crictl
	I1115 11:11:49.596673  644414 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:11:49.627565  644414 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:11:49.627655  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:11:49.657574  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:11:49.692786  644414 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:11:49.695732  644414 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 11:11:49.698667  644414 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:11:49.715635  644414 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 11:11:49.719827  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:11:49.729557  644414 mustload.go:66] Loading cluster: ha-439113
	I1115 11:11:49.729790  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:11:49.730057  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:11:49.747197  644414 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:11:49.747477  644414 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.3
	I1115 11:11:49.747492  644414 certs.go:195] generating shared ca certs ...
	I1115 11:11:49.747509  644414 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:11:49.747651  644414 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:11:49.747712  644414 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:11:49.747723  644414 certs.go:257] generating profile certs ...
	I1115 11:11:49.747793  644414 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key
	I1115 11:11:49.747854  644414 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key.29032bc8
	I1115 11:11:49.747896  644414 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key
	I1115 11:11:49.747908  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 11:11:49.747922  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 11:11:49.747939  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 11:11:49.747953  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 11:11:49.747968  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 11:11:49.747979  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 11:11:49.747995  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 11:11:49.748005  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 11:11:49.748058  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:11:49.748100  644414 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:11:49.748113  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:11:49.748139  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:11:49.748172  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:11:49.748196  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:11:49.748244  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:11:49.748274  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 11:11:49.748290  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 11:11:49.748302  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:49.748361  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 11:11:49.766640  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 11:11:49.865171  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 11:11:49.869248  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 11:11:49.877385  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 11:11:49.881661  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 11:11:49.890592  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 11:11:49.894372  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 11:11:49.902879  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 11:11:49.906594  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 11:11:49.914879  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 11:11:49.918911  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 11:11:49.928251  644414 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 11:11:49.931713  644414 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 11:11:49.939808  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:11:49.959417  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:11:49.979171  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:11:49.999374  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:11:50.034447  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 11:11:50.055956  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 11:11:50.075858  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:11:50.096569  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:11:50.123534  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:11:50.145099  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:11:50.165838  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:11:50.187631  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 11:11:50.201727  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 11:11:50.215561  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 11:11:50.228704  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 11:11:50.243716  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 11:11:50.256646  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 11:11:50.274083  644414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 11:11:50.289451  644414 ssh_runner.go:195] Run: openssl version
	I1115 11:11:50.296096  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:11:50.304816  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:11:50.308605  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:11:50.308696  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:11:50.349933  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:11:50.357859  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:11:50.366131  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:50.370090  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:50.370184  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:11:50.411529  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:11:50.419530  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:11:50.428122  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:11:50.431990  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:11:50.432078  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:11:50.473336  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:11:50.481905  644414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:11:50.485884  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:11:50.529145  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:11:50.575458  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:11:50.618147  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:11:50.660345  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:11:50.701441  644414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
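Each openssl invocation above is the stock "-checkend 86400" test: exit status 0 means the certificate stays valid for at least another 24 hours. A rough Go equivalent for illustration; the path used below is one of the certificates checked above and lives on the node, so treat it as a placeholder when running this anywhere else:

// Sketch: the Go counterpart of "openssl x509 -checkend 86400" -- report
// whether a PEM-encoded certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + d" is past the certificate's NotAfter timestamp.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path, taken from the checks logged above; runs on the node.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}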
	I1115 11:11:50.742918  644414 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1115 11:11:50.743050  644414 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:11:50.743086  644414 kube-vip.go:115] generating kube-vip config ...
	I1115 11:11:50.743137  644414 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 11:11:50.756533  644414 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:11:50.756661  644414 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1115 11:11:50.756809  644414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:11:50.766452  644414 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:11:50.766519  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 11:11:50.774299  644414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 11:11:50.787555  644414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:11:50.801348  644414 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 11:11:50.815426  644414 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 11:11:50.819361  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:11:50.829846  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:11:50.971817  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:11:50.986595  644414 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:11:50.987008  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:11:50.990541  644414 out.go:179] * Verifying Kubernetes components...
	I1115 11:11:50.993289  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:11:51.129111  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:11:51.143975  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 11:11:51.144052  644414 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 11:11:51.144377  644414 node_ready.go:35] waiting up to 6m0s for node "ha-439113-m02" to be "Ready" ...
	I1115 11:11:54.175109  644414 node_ready.go:49] node "ha-439113-m02" is "Ready"
	I1115 11:11:54.175142  644414 node_ready.go:38] duration metric: took 3.030741263s for node "ha-439113-m02" to be "Ready" ...
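Here node_ready.go polls the ha-439113-m02 node object until its Ready condition reports True. A compact client-go sketch of that wait, not minikube's implementation; it assumes the kubeconfig written earlier in this log and the node name from this run:

// Sketch: wait up to a deadline for a node's Ready condition to become True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from this run's log; adjust for other environments.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21894-584713/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-439113-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for node to become Ready")
}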
	I1115 11:11:54.175156  644414 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:11:54.175217  644414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:11:54.191139  644414 api_server.go:72] duration metric: took 3.204498804s to wait for apiserver process to appear ...
	I1115 11:11:54.191165  644414 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:11:54.191183  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:54.270987  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 11:11:54.271020  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 11:11:54.691298  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:54.702970  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:54.703005  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:55.191248  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:55.208784  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:55.208820  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:55.691283  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:55.701010  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:55.701040  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:56.191695  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:56.205744  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:11:56.205779  644414 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:11:56.691307  644414 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 11:11:56.703521  644414 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 11:11:56.706435  644414 api_server.go:141] control plane version: v1.34.1
	I1115 11:11:56.706475  644414 api_server.go:131] duration metric: took 2.515302396s to wait for apiserver health ...
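The block above is the apiserver health wait doing its job: the client polls https://192.168.49.2:8443/healthz about every half second, treats a 500 (here caused by the still-pending rbac/bootstrap-roles post-start hook) as "not ready yet", and stops as soon as the endpoint answers 200 ok. A minimal, independent sketch of that poll pattern in Go, assuming the profile's CA certificate path from this log, might look like this (it is not minikube's actual api_server.go):

	// Hedged sketch: poll an apiserver /healthz endpoint until it reports healthy.
	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func waitForHealthz(url string, pool *x509.CertPool, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "healthz returned 200: ok"
				}
				// A 500 with "[-]poststarthook/... failed" just means "retry shortly".
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s never became healthy", url)
	}

	func main() {
		pool := x509.NewCertPool()
		// CA path taken from the client config logged later in this run.
		if pem, err := os.ReadFile("/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt"); err == nil {
			pool.AppendCertsFromPEM(pem)
		}
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", pool, 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}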
	I1115 11:11:56.706484  644414 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:11:56.718211  644414 system_pods.go:59] 26 kube-system pods found
	I1115 11:11:56.718249  644414 system_pods.go:61] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.718259  644414 system_pods.go:61] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.718265  644414 system_pods.go:61] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 11:11:56.718282  644414 system_pods.go:61] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 11:11:56.718287  644414 system_pods.go:61] "etcd-ha-439113-m03" [5e59ce68-9c25-4639-ac5a-1f55855c2a60] Running
	I1115 11:11:56.718291  644414 system_pods.go:61] "kindnet-4k2k2" [5a741bbc-f2ab-4432-b229-309437f9455c] Running
	I1115 11:11:56.718295  644414 system_pods.go:61] "kindnet-kxl4t" [99aa3cce-8825-4785-a8c2-b42146240e09] Running
	I1115 11:11:56.718299  644414 system_pods.go:61] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 11:11:56.718305  644414 system_pods.go:61] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 11:11:56.718316  644414 system_pods.go:61] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 11:11:56.718322  644414 system_pods.go:61] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 11:11:56.718327  644414 system_pods.go:61] "kube-apiserver-ha-439113-m03" [46354a8c-2a61-4934-8b1a-57c563aa326b] Running
	I1115 11:11:56.718337  644414 system_pods.go:61] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:11:56.718352  644414 system_pods.go:61] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 11:11:56.718361  644414 system_pods.go:61] "kube-controller-manager-ha-439113-m03" [555d953c-b848-4daa-90c5-07b51c5c7722] Running
	I1115 11:11:56.718366  644414 system_pods.go:61] "kube-proxy-2fgtm" [7a3fd93a-54d8-4821-a49a-6839ed65fe69] Running
	I1115 11:11:56.718373  644414 system_pods.go:61] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 11:11:56.718384  644414 system_pods.go:61] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 11:11:56.718389  644414 system_pods.go:61] "kube-proxy-njlxj" [9150615b-96b9-416b-a5ca-79c380a8a9cb] Running
	I1115 11:11:56.718395  644414 system_pods.go:61] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:11:56.718405  644414 system_pods.go:61] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 11:11:56.718410  644414 system_pods.go:61] "kube-scheduler-ha-439113-m03" [e18cb155-9e7b-43e1-818b-bfff6a289f39] Running
	I1115 11:11:56.718414  644414 system_pods.go:61] "kube-vip-ha-439113" [8ed03cf0-14c3-4946-a73d-8cc5545156cb] Running
	I1115 11:11:56.718426  644414 system_pods.go:61] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 11:11:56.718432  644414 system_pods.go:61] "kube-vip-ha-439113-m03" [c0ddae32-acc6-4cda-8dde-084b2eea14a8] Running
	I1115 11:11:56.718438  644414 system_pods.go:61] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:11:56.718444  644414 system_pods.go:74] duration metric: took 11.954415ms to wait for pod list to return data ...
	I1115 11:11:56.718453  644414 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:11:56.724493  644414 default_sa.go:45] found service account: "default"
	I1115 11:11:56.724536  644414 default_sa.go:55] duration metric: took 6.072136ms for default service account to be created ...
	I1115 11:11:56.724547  644414 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:11:56.819602  644414 system_pods.go:86] 26 kube-system pods found
	I1115 11:11:56.819647  644414 system_pods.go:89] "coredns-66bc5c9577-4g6sm" [9460f377-28d8-418c-9dab-9428dfbfca1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.819658  644414 system_pods.go:89] "coredns-66bc5c9577-mlm6m" [d28d9bc0-5e46-4c01-8b62-aa0ef429d935] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:11:56.819664  644414 system_pods.go:89] "etcd-ha-439113" [cf7697cf-fe7e-4078-a3cc-92e0bdeaec7b] Running
	I1115 11:11:56.819670  644414 system_pods.go:89] "etcd-ha-439113-m02" [cb1c1f4d-03b1-462b-a3f7-9a0adbc017e7] Running
	I1115 11:11:56.819674  644414 system_pods.go:89] "etcd-ha-439113-m03" [5e59ce68-9c25-4639-ac5a-1f55855c2a60] Running
	I1115 11:11:56.819679  644414 system_pods.go:89] "kindnet-4k2k2" [5a741bbc-f2ab-4432-b229-309437f9455c] Running
	I1115 11:11:56.819694  644414 system_pods.go:89] "kindnet-kxl4t" [99aa3cce-8825-4785-a8c2-b42146240e09] Running
	I1115 11:11:56.819703  644414 system_pods.go:89] "kindnet-mcj42" [d1a7e2da-17cd-4e3c-a515-7c308ce20713] Running
	I1115 11:11:56.819711  644414 system_pods.go:89] "kindnet-q4kpj" [5da9cefc-49b3-4bc2-8cb6-db44ed04b358] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 11:11:56.819721  644414 system_pods.go:89] "kube-apiserver-ha-439113" [48976f63-de62-482c-8f65-edae19380332] Running
	I1115 11:11:56.819726  644414 system_pods.go:89] "kube-apiserver-ha-439113-m02" [b9d0652f-fada-4ba5-8f3d-812d9da42bc5] Running
	I1115 11:11:56.819730  644414 system_pods.go:89] "kube-apiserver-ha-439113-m03" [46354a8c-2a61-4934-8b1a-57c563aa326b] Running
	I1115 11:11:56.819738  644414 system_pods.go:89] "kube-controller-manager-ha-439113" [15798d9f-9f01-402c-b2c4-a720bf545ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:11:56.819747  644414 system_pods.go:89] "kube-controller-manager-ha-439113-m02" [3cf8f9d9-6855-47a3-86f9-c593cca08eef] Running
	I1115 11:11:56.819752  644414 system_pods.go:89] "kube-controller-manager-ha-439113-m03" [555d953c-b848-4daa-90c5-07b51c5c7722] Running
	I1115 11:11:56.819756  644414 system_pods.go:89] "kube-proxy-2fgtm" [7a3fd93a-54d8-4821-a49a-6839ed65fe69] Running
	I1115 11:11:56.819770  644414 system_pods.go:89] "kube-proxy-k7bcn" [4718f104-1eea-4e92-b339-dc6ae067eee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 11:11:56.819778  644414 system_pods.go:89] "kube-proxy-kgftx" [0fc96517-198e-406e-8b54-cfca391d6811] Running
	I1115 11:11:56.819783  644414 system_pods.go:89] "kube-proxy-njlxj" [9150615b-96b9-416b-a5ca-79c380a8a9cb] Running
	I1115 11:11:56.819789  644414 system_pods.go:89] "kube-scheduler-ha-439113" [974f6565-b0f8-4da8-abc5-b5f148e0f47c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:11:56.819797  644414 system_pods.go:89] "kube-scheduler-ha-439113-m02" [1acb9311-eb3c-4c86-a821-5aedf8998aa5] Running
	I1115 11:11:56.819803  644414 system_pods.go:89] "kube-scheduler-ha-439113-m03" [e18cb155-9e7b-43e1-818b-bfff6a289f39] Running
	I1115 11:11:56.819811  644414 system_pods.go:89] "kube-vip-ha-439113" [8ed03cf0-14c3-4946-a73d-8cc5545156cb] Running
	I1115 11:11:56.819815  644414 system_pods.go:89] "kube-vip-ha-439113-m02" [466791ec-d6aa-4e15-9274-3af6f2d3a138] Running
	I1115 11:11:56.819819  644414 system_pods.go:89] "kube-vip-ha-439113-m03" [c0ddae32-acc6-4cda-8dde-084b2eea14a8] Running
	I1115 11:11:56.819824  644414 system_pods.go:89] "storage-provisioner" [6a63ca66-7de2-40d8-96f0-a99da4ba3411] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:11:56.819841  644414 system_pods.go:126] duration metric: took 95.282586ms to wait for k8s-apps to be running ...
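The k8s-apps wait above boils down to one List of kube-system pods plus a per-pod status check; the "ContainersNotReady" suffixes come from each pod's Ready condition, which is reported alongside the Running phase. A rough client-go equivalent, with the kubeconfig path as a placeholder for how minikube actually builds its client, would be:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// The kubeconfig path here is an assumption for the sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			// Mirrors the "Running / Ready:ContainersNotReady" lines above.
			fmt.Printf("%s: phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
		}
	}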
	I1115 11:11:56.819854  644414 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:11:56.819918  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:11:56.837030  644414 system_svc.go:56] duration metric: took 17.155047ms WaitForService to wait for kubelet
	I1115 11:11:56.837061  644414 kubeadm.go:587] duration metric: took 5.85042521s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:11:56.837082  644414 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:11:56.841207  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:11:56.841239  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:11:56.841253  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:11:56.841257  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:11:56.841262  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:11:56.841265  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:11:56.841282  644414 node_conditions.go:105] duration metric: took 4.194343ms to run NodePressure ...
	I1115 11:11:56.841300  644414 start.go:242] waiting for startup goroutines ...
	I1115 11:11:56.841324  644414 start.go:256] writing updated cluster config ...
	I1115 11:11:56.844944  644414 out.go:203] 
	I1115 11:11:56.848069  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:11:56.848191  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:11:56.851562  644414 out.go:179] * Starting "ha-439113-m04" worker node in "ha-439113" cluster
	I1115 11:11:56.855417  644414 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:11:56.858314  644414 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:11:56.861196  644414 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:11:56.861243  644414 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:11:56.861453  644414 cache.go:65] Caching tarball of preloaded images
	I1115 11:11:56.861539  644414 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:11:56.861554  644414 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:11:56.861725  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:11:56.894239  644414 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:11:56.894262  644414 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:11:56.894277  644414 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:11:56.894301  644414 start.go:360] acquireMachinesLock for ha-439113-m04: {Name:mke6e857e5b25fb7a1d96f7fe08934c7b44258f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:11:56.894360  644414 start.go:364] duration metric: took 38.252µs to acquireMachinesLock for "ha-439113-m04"
	I1115 11:11:56.894384  644414 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:11:56.894391  644414 fix.go:54] fixHost starting: m04
	I1115 11:11:56.894639  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:11:56.934538  644414 fix.go:112] recreateIfNeeded on ha-439113-m04: state=Stopped err=<nil>
	W1115 11:11:56.934571  644414 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:11:56.937723  644414 out.go:252] * Restarting existing docker container for "ha-439113-m04" ...
	I1115 11:11:56.937813  644414 cli_runner.go:164] Run: docker start ha-439113-m04
	I1115 11:11:57.292353  644414 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:11:57.320590  644414 kic.go:430] container "ha-439113-m04" state is running.
	I1115 11:11:57.320978  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:11:57.343942  644414 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/config.json ...
	I1115 11:11:57.344181  644414 machine.go:94] provisionDockerMachine start ...
	I1115 11:11:57.344243  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:11:57.365933  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:11:57.366241  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:11:57.366255  644414 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:11:57.366995  644414 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:12:00.666212  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m04
	
	I1115 11:12:00.666285  644414 ubuntu.go:182] provisioning hostname "ha-439113-m04"
	I1115 11:12:00.666399  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:00.703141  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:12:00.703457  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:12:00.703468  644414 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-439113-m04 && echo "ha-439113-m04" | sudo tee /etc/hostname
	I1115 11:12:00.898855  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-439113-m04
	
	I1115 11:12:00.898950  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:00.948730  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:12:00.949093  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:12:00.949120  644414 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-439113-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-439113-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-439113-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:12:01.162002  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
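Everything in provisionDockerMachine runs over SSH: libmachine dials 127.0.0.1:33579 (the host port Docker mapped to the container's 22/tcp), retries while sshd inside the freshly restarted container is still coming up (the "handshake failed: EOF" above), and then executes the hostname and /etc/hosts commands. A stripped-down sketch of that pattern using golang.org/x/crypto/ssh, reusing the key path and user from this log but otherwise independent of libmachine, could look like:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test nodes only
			Timeout:         10 * time.Second,
		}
		var client *ssh.Client
		for i := 0; i < 10; i++ { // tolerate "handshake failed: EOF" while sshd starts
			if client, err = ssh.Dial("tcp", addr, cfg); err == nil {
				break
			}
			time.Sleep(time.Second)
		}
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runOverSSH("127.0.0.1:33579", "docker",
			"/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa",
			"hostname")
		fmt.Println(out, err)
	}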
	I1115 11:12:01.162071  644414 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:12:01.162106  644414 ubuntu.go:190] setting up certificates
	I1115 11:12:01.162147  644414 provision.go:84] configureAuth start
	I1115 11:12:01.162228  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:12:01.189297  644414 provision.go:143] copyHostCerts
	I1115 11:12:01.189345  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:12:01.189381  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:12:01.189387  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:12:01.189469  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:12:01.189552  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:12:01.189569  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:12:01.189574  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:12:01.189602  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:12:01.189643  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:12:01.189658  644414 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:12:01.189662  644414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:12:01.189686  644414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:12:01.189732  644414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.ha-439113-m04 san=[127.0.0.1 192.168.49.5 ha-439113-m04 localhost minikube]
	I1115 11:12:01.793644  644414 provision.go:177] copyRemoteCerts
	I1115 11:12:01.793724  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:12:01.793769  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:01.813786  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:01.932159  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 11:12:01.932221  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:12:01.959503  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 11:12:01.959565  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 11:12:01.985894  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 11:12:01.985956  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:12:02.016893  644414 provision.go:87] duration metric: took 854.716001ms to configureAuth
	I1115 11:12:02.016972  644414 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:12:02.017324  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:12:02.017494  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.042340  644414 main.go:143] libmachine: Using SSH client type: native
	I1115 11:12:02.042641  644414 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33579 <nil> <nil>}
	I1115 11:12:02.042657  644414 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:12:02.421793  644414 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:12:02.421855  644414 machine.go:97] duration metric: took 5.077657106s to provisionDockerMachine
	I1115 11:12:02.421891  644414 start.go:293] postStartSetup for "ha-439113-m04" (driver="docker")
	I1115 11:12:02.421937  644414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:12:02.422045  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:12:02.422113  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.441735  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.549972  644414 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:12:02.553292  644414 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:12:02.553326  644414 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:12:02.553339  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:12:02.553398  644414 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:12:02.553481  644414 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:12:02.553492  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /etc/ssl/certs/5865612.pem
	I1115 11:12:02.553591  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:12:02.561640  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:12:02.581188  644414 start.go:296] duration metric: took 159.246745ms for postStartSetup
	I1115 11:12:02.581283  644414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:12:02.581334  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.598560  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.702117  644414 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:12:02.707693  644414 fix.go:56] duration metric: took 5.813294693s for fixHost
	I1115 11:12:02.707719  644414 start.go:83] releasing machines lock for "ha-439113-m04", held for 5.813345581s
	I1115 11:12:02.707815  644414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 11:12:02.727805  644414 out.go:179] * Found network options:
	I1115 11:12:02.730701  644414 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1115 11:12:02.733528  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:12:02.733564  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:12:02.733599  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 11:12:02.733615  644414 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 11:12:02.733685  644414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:12:02.733735  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.734056  644414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:12:02.734115  644414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 11:12:02.762180  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.770444  644414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 11:12:02.906742  644414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:12:02.982777  644414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:12:02.982870  644414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:12:02.991311  644414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:12:02.991334  644414 start.go:496] detecting cgroup driver to use...
	I1115 11:12:02.991372  644414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:12:02.991426  644414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:12:03.010259  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:12:03.026209  644414 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:12:03.026295  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:12:03.042235  644414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:12:03.056541  644414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:12:03.207440  644414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:12:03.335536  644414 docker.go:234] disabling docker service ...
	I1115 11:12:03.335651  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:12:03.353883  644414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:12:03.369431  644414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:12:03.486211  644414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:12:03.610710  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:12:03.625360  644414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:12:03.641312  644414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:12:03.641378  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.651264  644414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:12:03.651338  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.665109  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.675589  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.686503  644414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:12:03.694865  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.705871  644414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.714726  644414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:12:03.723852  644414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:12:03.731853  644414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:12:03.740511  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:12:03.853255  644414 ssh_runner.go:195] Run: sudo systemctl restart crio
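The sequence of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. Pieced together from those commands alone (not read back from the node), the relevant keys in that drop-in should end up roughly as:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]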
	I1115 11:12:04.003040  644414 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:12:04.003163  644414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:12:04.007573  644414 start.go:564] Will wait 60s for crictl version
	I1115 11:12:04.007728  644414 ssh_runner.go:195] Run: which crictl
	I1115 11:12:04.014385  644414 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:12:04.042291  644414 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:12:04.042400  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:12:04.076162  644414 ssh_runner.go:195] Run: crio --version
	I1115 11:12:04.110265  644414 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:12:04.113250  644414 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 11:12:04.116130  644414 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1115 11:12:04.118985  644414 cli_runner.go:164] Run: docker network inspect ha-439113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:12:04.135746  644414 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 11:12:04.140419  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:12:04.151141  644414 mustload.go:66] Loading cluster: ha-439113
	I1115 11:12:04.151383  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:12:04.151632  644414 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:12:04.169829  644414 host.go:66] Checking if "ha-439113" exists ...
	I1115 11:12:04.170121  644414 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113 for IP: 192.168.49.5
	I1115 11:12:04.170137  644414 certs.go:195] generating shared ca certs ...
	I1115 11:12:04.170152  644414 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:12:04.170287  644414 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:12:04.170332  644414 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:12:04.170347  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 11:12:04.170362  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 11:12:04.170377  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 11:12:04.170392  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 11:12:04.170455  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:12:04.170489  644414 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:12:04.170502  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:12:04.170528  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:12:04.170554  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:12:04.170579  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:12:04.170625  644414 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:12:04.170653  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem -> /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.170666  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.170682  644414 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.170703  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:12:04.192999  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:12:04.214491  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:12:04.238386  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:12:04.261791  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:12:04.282186  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:12:04.301663  644414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:12:04.323494  644414 ssh_runner.go:195] Run: openssl version
	I1115 11:12:04.330506  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:12:04.339641  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.343359  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.343471  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:12:04.384944  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:12:04.393726  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:12:04.401885  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.405917  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.405984  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:12:04.448096  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:12:04.456341  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:12:04.464809  644414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.469548  644414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.469657  644414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:12:04.512809  644414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
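The openssl x509 -hash calls compute the subject-name hash that OpenSSL-style trust directories key on, and each ln -fs then publishes the certificate under that hash. Based only on the commands above, the resulting layout on the node would be approximately:

	/etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem
	/etc/ssl/certs/51391683.0 -> /etc/ssl/certs/586561.pem
	/etc/ssl/certs/3ec20f2e.0 -> /etc/ssl/certs/5865612.pem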
	I1115 11:12:04.521564  644414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:12:04.525477  644414 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 11:12:04.525571  644414 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1115 11:12:04.525671  644414 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-439113-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-439113 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:12:04.525750  644414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:12:04.534631  644414 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:12:04.534732  644414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1115 11:12:04.542762  644414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 11:12:04.555474  644414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:12:04.568549  644414 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 11:12:04.572246  644414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:12:04.582645  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:12:04.720397  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:12:04.734431  644414 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1115 11:12:04.734793  644414 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:12:04.737605  644414 out.go:179] * Verifying Kubernetes components...
	I1115 11:12:04.740524  644414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:12:04.870273  644414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:12:04.886167  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 11:12:04.886294  644414 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 11:12:04.886567  644414 node_ready.go:35] waiting up to 6m0s for node "ha-439113-m04" to be "Ready" ...
	I1115 11:12:04.890505  644414 node_ready.go:49] node "ha-439113-m04" is "Ready"
	I1115 11:12:04.890532  644414 node_ready.go:38] duration metric: took 3.920221ms for node "ha-439113-m04" to be "Ready" ...
	I1115 11:12:04.890569  644414 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:12:04.890627  644414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:12:04.906249  644414 system_svc.go:56] duration metric: took 15.693042ms WaitForService to wait for kubelet
	I1115 11:12:04.906349  644414 kubeadm.go:587] duration metric: took 171.724556ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:12:04.906397  644414 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:12:04.916259  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:12:04.916376  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:12:04.916421  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:12:04.916457  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:12:04.916477  644414 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:12:04.916512  644414 node_conditions.go:123] node cpu capacity is 2
	I1115 11:12:04.916538  644414 node_conditions.go:105] duration metric: took 10.120472ms to run NodePressure ...
	I1115 11:12:04.916592  644414 start.go:242] waiting for startup goroutines ...
	I1115 11:12:04.916629  644414 start.go:256] writing updated cluster config ...
	I1115 11:12:04.917071  644414 ssh_runner.go:195] Run: rm -f paused
	I1115 11:12:04.922331  644414 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:12:04.922989  644414 kapi.go:59] client config for ha-439113: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/ha-439113/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 11:12:04.955742  644414 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4g6sm" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:12:06.963336  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:08.980310  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:11.479328  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:13.964446  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:16.463626  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:18.465383  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:20.962686  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:22.964048  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:24.966447  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:27.463942  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	W1115 11:12:29.466713  644414 pod_ready.go:104] pod "coredns-66bc5c9577-4g6sm" is not "Ready", error: <nil>
	I1115 11:12:30.462795  644414 pod_ready.go:94] pod "coredns-66bc5c9577-4g6sm" is "Ready"
	I1115 11:12:30.462820  644414 pod_ready.go:86] duration metric: took 25.506978071s for pod "coredns-66bc5c9577-4g6sm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.462830  644414 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mlm6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.469415  644414 pod_ready.go:94] pod "coredns-66bc5c9577-mlm6m" is "Ready"
	I1115 11:12:30.469441  644414 pod_ready.go:86] duration metric: took 6.60411ms for pod "coredns-66bc5c9577-mlm6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.473231  644414 pod_ready.go:83] waiting for pod "etcd-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.480070  644414 pod_ready.go:94] pod "etcd-ha-439113" is "Ready"
	I1115 11:12:30.480096  644414 pod_ready.go:86] duration metric: took 6.837381ms for pod "etcd-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.480106  644414 pod_ready.go:83] waiting for pod "etcd-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.486550  644414 pod_ready.go:94] pod "etcd-ha-439113-m02" is "Ready"
	I1115 11:12:30.486578  644414 pod_ready.go:86] duration metric: took 6.465838ms for pod "etcd-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.486589  644414 pod_ready.go:83] waiting for pod "etcd-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.657170  644414 request.go:683] "Waited before sending request" delay="167.271906ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 11:12:30.660251  644414 pod_ready.go:99] pod "etcd-ha-439113-m03" in "kube-system" namespace is gone: node "ha-439113-m03" hosting pod "etcd-ha-439113-m03" is not found/running (skipping!): nodes "ha-439113-m03" not found
	I1115 11:12:30.660271  644414 pod_ready.go:86] duration metric: took 173.674417ms for pod "etcd-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:30.856532  644414 request.go:683] "Waited before sending request" delay="196.157902ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1115 11:12:30.862230  644414 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.056631  644414 request.go:683] "Waited before sending request" delay="194.303781ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113"
	I1115 11:12:31.256567  644414 request.go:683] "Waited before sending request" delay="196.320457ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:31.260364  644414 pod_ready.go:94] pod "kube-apiserver-ha-439113" is "Ready"
	I1115 11:12:31.260440  644414 pod_ready.go:86] duration metric: took 398.184225ms for pod "kube-apiserver-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.260460  644414 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.456733  644414 request.go:683] "Waited before sending request" delay="196.195936ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113-m02"
	I1115 11:12:31.657283  644414 request.go:683] "Waited before sending request" delay="189.364553ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:31.669486  644414 pod_ready.go:94] pod "kube-apiserver-ha-439113-m02" is "Ready"
	I1115 11:12:31.669527  644414 pod_ready.go:86] duration metric: took 409.053455ms for pod "kube-apiserver-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.669545  644414 pod_ready.go:83] waiting for pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:31.856759  644414 request.go:683] "Waited before sending request" delay="187.140315ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-439113-m03"
	I1115 11:12:32.057081  644414 request.go:683] "Waited before sending request" delay="194.340659ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m03"
	I1115 11:12:32.060246  644414 pod_ready.go:99] pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace is gone: node "ha-439113-m03" hosting pod "kube-apiserver-ha-439113-m03" is not found/running (skipping!): nodes "ha-439113-m03" not found
	I1115 11:12:32.060269  644414 pod_ready.go:86] duration metric: took 390.716754ms for pod "kube-apiserver-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:32.256765  644414 request.go:683] "Waited before sending request" delay="196.346784ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1115 11:12:32.260967  644414 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:32.457411  644414 request.go:683] "Waited before sending request" delay="196.343854ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-439113"
	I1115 11:12:32.656543  644414 request.go:683] "Waited before sending request" delay="195.259075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:32.857312  644414 request.go:683] "Waited before sending request" delay="95.237723ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-439113"
	I1115 11:12:33.056759  644414 request.go:683] "Waited before sending request" delay="193.348543ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:33.456512  644414 request.go:683] "Waited before sending request" delay="191.213474ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:33.857248  644414 request.go:683] "Waited before sending request" delay="92.163849ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	W1115 11:12:34.268915  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:36.769187  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:38.769594  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:40.775431  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	W1115 11:12:43.268655  644414 pod_ready.go:104] pod "kube-controller-manager-ha-439113" is not "Ready", error: <nil>
	I1115 11:12:45.275032  644414 pod_ready.go:94] pod "kube-controller-manager-ha-439113" is "Ready"
	I1115 11:12:45.275075  644414 pod_ready.go:86] duration metric: took 13.01407493s for pod "kube-controller-manager-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.275087  644414 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.305482  644414 pod_ready.go:94] pod "kube-controller-manager-ha-439113-m02" is "Ready"
	I1115 11:12:45.305509  644414 pod_ready.go:86] duration metric: took 30.414418ms for pod "kube-controller-manager-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.305520  644414 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.308592  644414 pod_ready.go:99] pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace is gone: getting pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace (will retry): pods "kube-controller-manager-ha-439113-m03" not found
	I1115 11:12:45.308616  644414 pod_ready.go:86] duration metric: took 3.088777ms for pod "kube-controller-manager-ha-439113-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.312595  644414 pod_ready.go:83] waiting for pod "kube-proxy-2fgtm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.319584  644414 pod_ready.go:94] pod "kube-proxy-2fgtm" is "Ready"
	I1115 11:12:45.319658  644414 pod_ready.go:86] duration metric: took 6.96691ms for pod "kube-proxy-2fgtm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.319684  644414 pod_ready.go:83] waiting for pod "kube-proxy-k7bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.333364  644414 pod_ready.go:94] pod "kube-proxy-k7bcn" is "Ready"
	I1115 11:12:45.333446  644414 pod_ready.go:86] duration metric: took 13.743575ms for pod "kube-proxy-k7bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.333472  644414 pod_ready.go:83] waiting for pod "kube-proxy-kgftx" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.461841  644414 request.go:683] "Waited before sending request" delay="128.26876ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kgftx"
	I1115 11:12:45.662133  644414 request.go:683] "Waited before sending request" delay="196.336603ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:45.666231  644414 pod_ready.go:94] pod "kube-proxy-kgftx" is "Ready"
	I1115 11:12:45.666259  644414 pod_ready.go:86] duration metric: took 332.766862ms for pod "kube-proxy-kgftx" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:45.862402  644414 request.go:683] "Waited before sending request" delay="196.047882ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1115 11:12:45.868100  644414 pod_ready.go:83] waiting for pod "kube-scheduler-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:46.061503  644414 request.go:683] "Waited before sending request" delay="193.299208ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113"
	I1115 11:12:46.262349  644414 request.go:683] "Waited before sending request" delay="196.337092ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113"
	I1115 11:12:46.266390  644414 pod_ready.go:94] pod "kube-scheduler-ha-439113" is "Ready"
	I1115 11:12:46.266415  644414 pod_ready.go:86] duration metric: took 398.289218ms for pod "kube-scheduler-ha-439113" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:46.266426  644414 pod_ready.go:83] waiting for pod "kube-scheduler-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:12:46.461857  644414 request.go:683] "Waited before sending request" delay="195.354736ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113-m02"
	I1115 11:12:46.662164  644414 request.go:683] "Waited before sending request" delay="196.315389ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:46.862451  644414 request.go:683] "Waited before sending request" delay="95.198714ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-439113-m02"
	I1115 11:12:47.062064  644414 request.go:683] "Waited before sending request" delay="194.32444ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:47.462004  644414 request.go:683] "Waited before sending request" delay="191.259764ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	I1115 11:12:47.862129  644414 request.go:683] "Waited before sending request" delay="91.206426ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-439113-m02"
	W1115 11:12:48.273067  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:50.273503  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:52.273873  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:54.774253  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:56.774741  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:12:59.273054  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:01.273531  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:03.274007  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:05.773995  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:08.274070  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:10.774950  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:13.273142  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:15.774523  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:18.275146  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:20.775066  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:23.273644  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:25.772983  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:27.773086  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:29.774439  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:32.274282  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:34.773274  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:36.774007  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:38.774499  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:41.272920  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:43.272980  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:45.290069  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:47.774370  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:49.775099  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:52.273471  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:54.774040  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:56.776828  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:13:58.777477  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:01.274086  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:03.774603  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:06.274270  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:08.776333  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:11.274406  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:13.775288  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:16.274470  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:18.774609  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:21.275329  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:23.773704  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:25.781356  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:28.273802  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:30.773867  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:33.273730  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:35.274388  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:37.774988  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:40.273650  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:42.274574  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:44.775136  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:47.273253  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:49.774129  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:52.274209  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:54.773957  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:56.774057  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:14:58.774103  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:00.794798  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:03.273466  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:05.274892  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:07.773906  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:09.775150  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:12.274372  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:14.773892  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:16.774210  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:19.273576  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:21.773796  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:24.273997  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:26.274175  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:28.775134  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:31.275044  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:33.773408  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:35.774067  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:37.774322  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:40.273391  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:42.275088  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:44.773835  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:46.773944  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:49.273345  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:51.274206  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:53.275406  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:55.276298  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:57.773509  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:15:59.773622  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:16:01.773991  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	W1115 11:16:04.273687  644414 pod_ready.go:104] pod "kube-scheduler-ha-439113-m02" is not "Ready", error: <nil>
	I1115 11:16:04.922792  644414 pod_ready.go:86] duration metric: took 3m18.656348919s for pod "kube-scheduler-ha-439113-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:16:04.922828  644414 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1115 11:16:04.922844  644414 pod_ready.go:40] duration metric: took 4m0.000432421s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:16:04.926118  644414 out.go:203] 
	W1115 11:16:04.928902  644414 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1115 11:16:04.931693  644414 out.go:203] 
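	The trace above shows minikube's pod_ready helper polling each kube-system pod until it either reports a Ready condition or disappears, and giving up when the surrounding context deadline expires. The sketch below is a minimal client-go version of that wait loop, not minikube's actual pod_ready.go code: the pod name, namespace, and the 4-minute budget are taken from the log, while the kubeconfig path and the 2-second poll interval are assumptions.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Load the default kubeconfig (~/.kube/config); an assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Overall budget mirrors the 4m "extra waiting" window reported in the log above.
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
	
		ns, name := "kube-system", "coredns-66bc5c9577-4g6sm"
		for {
			pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			switch {
			case apierrors.IsNotFound(err):
				fmt.Println("pod is gone; nothing left to wait for")
				return
			case err == nil && isPodReady(pod):
				fmt.Println("pod is Ready")
				return
			case err != nil:
				fmt.Println("transient error, will retry:", err)
			}
			select {
			case <-ctx.Done():
				fmt.Println("context deadline exceeded while waiting for pod")
				return
			case <-time.After(2 * time.Second): // assumed poll interval
			}
		}
	}
	
	The recurring "Waited before sending request ... client-side throttling" entries come from client-go's client-side rate limiter: with QPS and Burst left at zero in the rest.Config shown at the top of this trace, client-go falls back to its built-in defaults (historically 5 requests/s with a burst of 10), so the bursts of GETs against pods and nodes are spaced out rather than sent immediately.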
	
	
	==> CRI-O <==
	Nov 15 11:12:27 ha-439113 crio[666]: time="2025-11-15T11:12:27.920544626Z" level=info msg="Started container" PID=1433 containerID=45eb4921c003b25c5119ab01196399bab3eb8157fb07652ba3dcd97194afeb00 description=kube-system/kube-controller-manager-ha-439113/kube-controller-manager id=fa832c19-eb18-47af-80d3-4790cad3225e name=/runtime.v1.RuntimeService/StartContainer sandboxID=21e90ac59d7247826fca1e350ef4c6d641540ffb41065bb8d5e3136341a1f7e4
	Nov 15 11:12:28 ha-439113 conmon[1137]: conmon d86466a64c1754474a32 <ninfo>: container 1142 exited with status 1
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.303366553Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=776f7c67-301a-4655-9f1e-c0f4d2b6bdaf name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.306045894Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=01b7c975-ef4d-4609-85fa-e323353431bd name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.308511994Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8f6d110a-f199-4160-b315-87aac4712b71 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.308610668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.319769952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.320004347Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1658f23bf43e3861272003631cb2125f6cd69132a0a16a46de920e7b647021eb/merged/etc/passwd: no such file or directory"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.320027059Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1658f23bf43e3861272003631cb2125f6cd69132a0a16a46de920e7b647021eb/merged/etc/group: no such file or directory"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.320305901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.388496736Z" level=info msg="Created container 4307de9c87d365cc4c90d647228026e786575caa2299668420c19c736afced68: kube-system/storage-provisioner/storage-provisioner" id=8f6d110a-f199-4160-b315-87aac4712b71 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.38961912Z" level=info msg="Starting container: 4307de9c87d365cc4c90d647228026e786575caa2299668420c19c736afced68" id=bfef2a5f-46f3-44e9-9266-3ac15c3e2f60 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:12:28 ha-439113 crio[666]: time="2025-11-15T11:12:28.393175299Z" level=info msg="Started container" PID=1445 containerID=4307de9c87d365cc4c90d647228026e786575caa2299668420c19c736afced68 description=kube-system/storage-provisioner/storage-provisioner id=bfef2a5f-46f3-44e9-9266-3ac15c3e2f60 name=/runtime.v1.RuntimeService/StartContainer sandboxID=94d3e897f0476e4f3abaa049d7990fde57c5406c8c5bb70e73a7146a92b5c99a
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.422814838Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.426273738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.426311481Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.42633375Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.435633901Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.43567025Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.435692969Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.443292786Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.443437303Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.443463231Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.447544594Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:12:38 ha-439113 crio[666]: time="2025-11-15T11:12:38.447580648Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	4307de9c87d36       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       4                   94d3e897f0476       storage-provisioner                 kube-system
	45eb4921c003b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   5 minutes ago       Running             kube-controller-manager   6                   21e90ac59d724       kube-controller-manager-ha-439113   kube-system
	56ca04edf5389       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   5 minutes ago       Running             busybox                   2                   b9f35a414830a       busybox-7b57f96db7-vddcm            default
	16ebc70b03ad3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 minutes ago       Running             kube-proxy                2                   dbf5fcdbf92d1       kube-proxy-k7bcn                    kube-system
	ff8f6f3f30d64       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   2                   d43213c9afa20       coredns-66bc5c9577-mlm6m            kube-system
	66d3cca12da72       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   2                   8504950f9102e       coredns-66bc5c9577-4g6sm            kube-system
	624e9c4484de9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Running             kindnet-cni               2                   02b3165dd3170       kindnet-q4kpj                       kube-system
	d86466a64c175       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Exited              storage-provisioner       3                   94d3e897f0476       storage-provisioner                 kube-system
	be71898116747       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   5                   21e90ac59d724       kube-controller-manager-ha-439113   kube-system
	d24d48c3f9b01       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            3                   80d29a5d57c81       kube-apiserver-ha-439113            kube-system
	ab0d0c34b46d5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Running             etcd                      2                   e3e01caa47fdb       etcd-ha-439113                      kube-system
	f5462600e253c       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   7 minutes ago       Running             kube-vip                  2                   c0b629ba4b9ea       kube-vip-ha-439113                  kube-system
	c9aa769ac1e41       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   7 minutes ago       Exited              kube-apiserver            2                   80d29a5d57c81       kube-apiserver-ha-439113            kube-system
	e0b918dd4970f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            2                   1552e5cdb042a       kube-scheduler-ha-439113            kube-system
	
	
	==> coredns [66d3cca12da72808d1018e1a6ec972546fda6374c31dd377d5d8dc684e2ceb3e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34700 - 4439 "HINFO IN 6986068788273380099.6825403624280059219. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030217966s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [ff8f6f3f30d64dbd44181797a52d66d21ee28c0ae7639d5d1bdbffd3052c24be] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40461 - 514 "HINFO IN 2475121785806463085.1107501801826590384. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005830505s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
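	Both CoreDNS replicas report the same failure mode: the kubernetes plugin cannot list Services, Namespaces, or EndpointSlices because dials to the in-cluster API Service at 10.96.0.1:443 time out. A quick way to reproduce that reachability check is a plain TCP dial against the same address from a pod on the affected node (for example via kubectl exec or an ephemeral debug pod), so the connection follows the same kube-proxy path CoreDNS uses. The sketch below is a diagnostic assumption, not part of the test suite, and the 5-second timeout is arbitrary.
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Dial the kubernetes Service ClusterIP that CoreDNS is failing to reach.
		addr := "10.96.0.1:443"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// An error here mirrors the "dial tcp 10.96.0.1:443: i/o timeout" seen in the CoreDNS logs.
			fmt.Println("unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("TCP connect to", addr, "succeeded")
	}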
	
	
	==> describe nodes <==
	Name:               ha-439113
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_52_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:52:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:17:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:17:10 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:17:10 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:17:10 +0000   Sat, 15 Nov 2025 10:52:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:17:10 +0000   Sat, 15 Nov 2025 11:12:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-439113
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                6518a9f9-bb2d-42ae-b78a-3db01b5306a4
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vddcm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-66bc5c9577-4g6sm             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     24m
	  kube-system                 coredns-66bc5c9577-mlm6m             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     24m
	  kube-system                 etcd-ha-439113                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         24m
	  kube-system                 kindnet-q4kpj                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      24m
	  kube-system                 kube-apiserver-ha-439113             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-439113    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-k7bcn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-439113             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-439113                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m31s                  kube-proxy       
	  Normal   Starting                 9m24s                  kube-proxy       
	  Normal   Starting                 24m                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    24m                    kubelet          Node ha-439113 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24m                    kubelet          Node ha-439113 status is now: NodeHasSufficientPID
	  Normal   Starting                 24m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 24m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  24m                    kubelet          Node ha-439113 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           24m                    node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           24m                    node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   NodeReady                24m                    kubelet          Node ha-439113 status is now: NodeReady
	  Normal   RegisteredNode           22m                    node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   NodeHasSufficientMemory  9m52s (x8 over 9m52s)  kubelet          Node ha-439113 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m52s (x8 over 9m52s)  kubelet          Node ha-439113 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m52s (x8 over 9m52s)  kubelet          Node ha-439113 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 9m52s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 9m52s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           9m26s                  node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           9m12s                  node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           8m35s                  node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   NodeHasSufficientMemory  7m24s (x8 over 7m24s)  kubelet          Node ha-439113 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m24s (x8 over 7m24s)  kubelet          Node ha-439113 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m24s (x8 over 7m24s)  kubelet          Node ha-439113 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m32s                  node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           5m1s                   node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	  Normal   RegisteredNode           49s                    node-controller  Node ha-439113 event: Registered Node ha-439113 in Controller
	
	
	Name:               ha-439113-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T10_53_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:53:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:17:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:16:11 +0000   Sat, 15 Nov 2025 11:08:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:16:11 +0000   Sat, 15 Nov 2025 11:08:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:16:11 +0000   Sat, 15 Nov 2025 11:08:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:16:11 +0000   Sat, 15 Nov 2025 11:12:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-439113-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d3455c64-e9a7-4ebe-b716-3cc9dc8ab51a
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6x277                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 etcd-ha-439113-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         24m
	  kube-system                 kindnet-mcj42                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      24m
	  kube-system                 kube-apiserver-ha-439113-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-439113-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-kgftx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-439113-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-439113-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 23m                    kube-proxy       
	  Normal   Starting                 9m2s                   kube-proxy       
	  Normal   Starting                 4m57s                  kube-proxy       
	  Normal   RegisteredNode           24m                    node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           24m                    node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           22m                    node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   NodeNotReady             18m                    node-controller  Node ha-439113-m02 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     9m48s (x8 over 9m48s)  kubelet          Node ha-439113-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    9m48s (x8 over 9m48s)  kubelet          Node ha-439113-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 9m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m48s (x8 over 9m48s)  kubelet          Node ha-439113-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           9m26s                  node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           9m12s                  node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           8m35s                  node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   Starting                 7m20s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m20s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m20s (x8 over 7m20s)  kubelet          Node ha-439113-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m20s (x8 over 7m20s)  kubelet          Node ha-439113-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m20s (x8 over 7m20s)  kubelet          Node ha-439113-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        6m20s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m32s                  node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           5m1s                   node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	  Normal   RegisteredNode           49s                    node-controller  Node ha-439113-m02 event: Registered Node ha-439113-m02 in Controller
	
	
	Name:               ha-439113-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T10_56_52_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:56:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:17:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:17:07 +0000   Sat, 15 Nov 2025 11:08:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:17:07 +0000   Sat, 15 Nov 2025 11:08:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:17:07 +0000   Sat, 15 Nov 2025 11:08:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:17:07 +0000   Sat, 15 Nov 2025 11:08:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-439113-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                bf4456d3-e8dc-4a97-8e4f-cb829c9a4b90
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-trswm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kindnet-4k2k2               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      20m
	  kube-system                 kube-proxy-2fgtm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m1s                   kube-proxy       
	  Normal   Starting                 20m                    kube-proxy       
	  Normal   Starting                 8m22s                  kube-proxy       
	  Warning  CgroupV1                 20m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 20m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     20m (x3 over 20m)      kubelet          Node ha-439113-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           20m                    node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   NodeHasNoDiskPressure    20m (x3 over 20m)      kubelet          Node ha-439113-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  20m (x3 over 20m)      kubelet          Node ha-439113-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           20m                    node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   RegisteredNode           20m                    node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   NodeReady                19m                    kubelet          Node ha-439113-m04 status is now: NodeReady
	  Normal   RegisteredNode           9m27s                  node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   RegisteredNode           9m13s                  node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   Starting                 8m47s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m47s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     8m44s (x8 over 8m47s)  kubelet          Node ha-439113-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  8m44s (x8 over 8m47s)  kubelet          Node ha-439113-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m44s (x8 over 8m47s)  kubelet          Node ha-439113-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeNotReady             8m37s                  node-controller  Node ha-439113-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           8m36s                  node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   Starting                 5m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           5m33s                  node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   NodeHasSufficientMemory  5m32s (x8 over 5m35s)  kubelet          Node ha-439113-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m32s (x8 over 5m35s)  kubelet          Node ha-439113-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m32s (x8 over 5m35s)  kubelet          Node ha-439113-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m2s                   node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	  Normal   RegisteredNode           50s                    node-controller  Node ha-439113-m04 event: Registered Node ha-439113-m04 in Controller
	
	
	Name:               ha-439113-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-439113-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=ha-439113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T11_16_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:16:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-439113-m05
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:17:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:17:28 +0000   Sat, 15 Nov 2025 11:16:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:17:28 +0000   Sat, 15 Nov 2025 11:16:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:17:28 +0000   Sat, 15 Nov 2025 11:16:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:17:28 +0000   Sat, 15 Nov 2025 11:17:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-439113-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                9519d12b-1381-46af-a5d0-67966195263b
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-439113-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         47s
	  kube-system                 kindnet-8nvpw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      48s
	  kube-system                 kube-apiserver-ha-439113-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-controller-manager-ha-439113-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-proxy-f6lmp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-scheduler-ha-439113-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-vip-ha-439113-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        44s   kube-proxy       
	  Normal  RegisteredNode  47s   node-controller  Node ha-439113-m05 event: Registered Node ha-439113-m05 in Controller
	  Normal  RegisteredNode  47s   node-controller  Node ha-439113-m05 event: Registered Node ha-439113-m05 in Controller
	  Normal  RegisteredNode  45s   node-controller  Node ha-439113-m05 event: Registered Node ha-439113-m05 in Controller
	
	
	==> dmesg <==
	[Nov15 09:26] systemd-journald[225]: Failed to send WATCHDOG=1 notification message: Connection refused
	[Nov15 09:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:30] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[  +0.057232] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov15 10:38] overlayfs: idmapped layers are currently not supported
	[Nov15 10:39] overlayfs: idmapped layers are currently not supported
	[Nov15 10:52] overlayfs: idmapped layers are currently not supported
	[Nov15 10:53] overlayfs: idmapped layers are currently not supported
	[Nov15 10:54] overlayfs: idmapped layers are currently not supported
	[Nov15 10:56] overlayfs: idmapped layers are currently not supported
	[Nov15 10:58] overlayfs: idmapped layers are currently not supported
	[Nov15 11:07] overlayfs: idmapped layers are currently not supported
	[  +3.621339] overlayfs: idmapped layers are currently not supported
	[Nov15 11:08] overlayfs: idmapped layers are currently not supported
	[Nov15 11:09] overlayfs: idmapped layers are currently not supported
	[Nov15 11:10] overlayfs: idmapped layers are currently not supported
	[  +3.526164] overlayfs: idmapped layers are currently not supported
	[Nov15 11:12] overlayfs: idmapped layers are currently not supported
	[Nov15 11:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ab0d0c34b46d585c39a39112a9d96382b3c2d54b036b01e5aabb4c9adb26fe48] <==
	{"level":"info","ts":"2025-11-15T11:16:32.477884Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"306d1fed790b9ab2"}
	{"level":"warn","ts":"2025-11-15T11:16:34.078818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:54404","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T11:16:34.836978Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"306d1fed790b9ab2"}
	{"level":"info","ts":"2025-11-15T11:16:34.837034Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"306d1fed790b9ab2"}
	{"level":"info","ts":"2025-11-15T11:16:34.837110Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"306d1fed790b9ab2"}
	{"level":"info","ts":"2025-11-15T11:16:34.849306Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"306d1fed790b9ab2","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-15T11:16:34.849348Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"306d1fed790b9ab2"}
	{"level":"info","ts":"2025-11-15T11:16:34.937638Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"306d1fed790b9ab2","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-15T11:16:34.937677Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"306d1fed790b9ab2"}
	{"level":"warn","ts":"2025-11-15T11:16:35.235131Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"306d1fed790b9ab2","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:16:35.235251Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"306d1fed790b9ab2","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:16:35.400572Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"306d1fed790b9ab2","error":"failed to dial 306d1fed790b9ab2 on stream MsgApp v2 (peer 306d1fed790b9ab2 failed to find local node aec36adc501070cc)"}
	{"level":"info","ts":"2025-11-15T11:16:35.632972Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(1219917390783646217 3489480391080516274 12593026477526642892)"}
	{"level":"info","ts":"2025-11-15T11:16:35.633209Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"306d1fed790b9ab2"}
	{"level":"info","ts":"2025-11-15T11:16:35.722613Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"306d1fed790b9ab2"}
	{"level":"info","ts":"2025-11-15T11:16:35.722726Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"306d1fed790b9ab2"}
	{"level":"info","ts":"2025-11-15T11:16:35.842046Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"306d1fed790b9ab2","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-11-15T11:16:35.842087Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"306d1fed790b9ab2"}
	{"level":"info","ts":"2025-11-15T11:16:35.842098Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"306d1fed790b9ab2"}
	{"level":"info","ts":"2025-11-15T11:16:35.842516Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"306d1fed790b9ab2"}
	{"level":"info","ts":"2025-11-15T11:16:35.936720Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"306d1fed790b9ab2","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-11-15T11:16:35.936763Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"306d1fed790b9ab2"}
	{"level":"info","ts":"2025-11-15T11:16:35.936775Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"306d1fed790b9ab2"}
	{"level":"warn","ts":"2025-11-15T11:17:32.869124Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.468655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:368873"}
	{"level":"info","ts":"2025-11-15T11:17:32.869224Z","caller":"traceutil/trace.go:172","msg":"trace[45307404] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:5121; }","duration":"148.576274ms","start":"2025-11-15T11:17:32.720627Z","end":"2025-11-15T11:17:32.869203Z","steps":["trace[45307404] 'range keys from bolt db'  (duration: 146.986807ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:17:33 up  3:00,  0 user,  load average: 1.19, 1.28, 1.41
	Linux ha-439113 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [624e9c4484de9254bf51adb5f68cf3ee64fa67c57ec0731d0bf92706a6167a9c] <==
	I1115 11:16:58.422563       1 main.go:324] Node ha-439113-m05 has CIDR [10.244.2.0/24] 
	I1115 11:17:08.422584       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:17:08.422617       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:17:08.422973       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1115 11:17:08.422998       1 main.go:324] Node ha-439113-m05 has CIDR [10.244.2.0/24] 
	I1115 11:17:08.423417       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:17:08.423438       1 main.go:301] handling current node
	I1115 11:17:08.423453       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:17:08.423458       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:17:18.422272       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:17:18.422328       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:17:18.422495       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1115 11:17:18.422509       1 main.go:324] Node ha-439113-m05 has CIDR [10.244.2.0/24] 
	I1115 11:17:18.422575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:17:18.422587       1 main.go:301] handling current node
	I1115 11:17:18.422601       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:17:18.422606       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:17:28.421730       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 11:17:28.421763       1 main.go:301] handling current node
	I1115 11:17:28.421783       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 11:17:28.421789       1 main.go:324] Node ha-439113-m02 has CIDR [10.244.1.0/24] 
	I1115 11:17:28.421976       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 11:17:28.421993       1 main.go:324] Node ha-439113-m04 has CIDR [10.244.3.0/24] 
	I1115 11:17:28.422098       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1115 11:17:28.422160       1 main.go:324] Node ha-439113-m05 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [c9aa769ac1e410d0690ad31ea1ef812bb7de4c70e937d471392caf66737a2862] <==
	{"level":"warn","ts":"2025-11-15T11:11:11.780145Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001588b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780169Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001d63860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780193Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001969680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780223Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40027a8d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780249Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40027a8d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780277Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40022752c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780304Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40023e4780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780333Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026c3860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780359Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026c3860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780383Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019be3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780406Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002bd4780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780427Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002bd4780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780448Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014fe960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780469Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001798960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780496Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001798960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780520Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015881e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780543Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019bed20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780567Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40025c4d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780589Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021ce5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780615Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021ce5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780637Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014fef00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780660Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014fef00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-15T11:11:11.780685Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400201a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	F1115 11:11:17.182112       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	{"level":"warn","ts":"2025-11-15T11:11:17.353763Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400250af00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	
	
	==> kube-apiserver [d24d48c3f9b01e8a715249be7330e6cfad6f59261b7723b5de70efa554928964] <==
	I1115 11:11:54.167816       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 11:11:54.174315       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:11:54.174482       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 11:11:54.197129       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 11:11:54.198171       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 11:11:54.225142       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 11:11:54.260659       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 11:11:54.275062       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 11:11:54.276988       1 cache.go:39] Caches are synced for autoregister controller
	I1115 11:11:54.298453       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:11:54.354535       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1115 11:11:54.363129       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1115 11:11:54.364714       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 11:11:54.378229       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 11:11:54.378262       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 11:11:54.378385       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 11:11:54.401493       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1115 11:11:54.415287       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1115 11:11:54.477155       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 11:11:54.477232       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 11:11:55.801917       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1115 11:11:56.437942       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1115 11:12:01.275927       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 11:12:31.830683       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 11:12:37.901647       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [45eb4921c003b25c5119ab01196399bab3eb8157fb07652ba3dcd97194afeb00] <==
	I1115 11:12:31.407984       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 11:12:31.413011       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 11:12:31.413156       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 11:12:31.418877       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 11:12:31.426163       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 11:12:31.428921       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 11:12:31.429031       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 11:12:31.429079       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 11:12:31.434013       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 11:12:31.441466       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 11:12:31.446524       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 11:12:31.449650       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 11:12:31.481074       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:12:31.481105       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:12:31.481113       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:12:31.519964       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1115 11:16:44.942103       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-wsrps failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-wsrps\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1115 11:16:44.942783       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-wsrps failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-wsrps\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1115 11:16:45.263015       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-439113-m04"
	I1115 11:16:45.263277       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-439113-m05\" does not exist"
	I1115 11:16:45.322489       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-439113-m05" podCIDRs=["10.244.2.0/24"]
	I1115 11:16:46.422583       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-439113-m05"
	E1115 11:16:48.795397       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"34fb978f-542d-4ab7-b285-c26d8e9a25fe\", ResourceVersion:\"4911\", Generation:1, CreationTimestamp:time.Date(2025, time.November, 15, 10, 52, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400197aea0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"
\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSourc
e)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4002189a40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000abbc08), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000abbc20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtu
alDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.34.1\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0x4002a96420)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Re
sourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Life
cycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0x4002159f20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0x4002066688), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000e37440), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", Tole
rationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001bf92a0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40020666e0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailab
le:3, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1115 11:17:28.630114       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-439113-m04"
	
	
	==> kube-controller-manager [be718981167470587e7edcab954bb28586e88b90bde200f9d703d4bf87527c41] <==
	I1115 11:11:30.275411       1 serving.go:386] Generated self-signed cert in-memory
	I1115 11:11:31.365181       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1115 11:11:31.365208       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:11:31.368367       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1115 11:11:31.370810       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 11:11:31.370917       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1115 11:11:31.371086       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1115 11:11:41.387475       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [16ebc70b03ad38e3a7e5abff3cead02f628f4a722d181136401c1a8c416ae823] <==
	I1115 11:12:01.396280       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:12:01.491396       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:12:01.592661       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:12:01.592701       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 11:12:01.592780       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:12:01.742121       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:12:01.742188       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:12:01.763218       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:12:01.764138       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:12:01.764797       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:12:01.789051       1 config.go:200] "Starting service config controller"
	I1115 11:12:01.789146       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:12:01.789599       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:12:01.789660       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:12:01.789732       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:12:01.789761       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:12:01.794216       1 config.go:309] "Starting node config controller"
	I1115 11:12:01.794306       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:12:01.794337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:12:01.890300       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:12:01.890346       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 11:12:01.890389       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e0b918dd4970fd4deab2473f719156caad36c70e91836ec9407fd62c0e66c2f1] <==
	E1115 11:11:36.951331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:11:37.615660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 11:11:38.448988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 11:11:40.797158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 11:11:41.756113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 11:11:44.289532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1115 11:12:21.588534       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1115 11:16:45.403312       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8nvpw\": pod kindnet-8nvpw is already assigned to node \"ha-439113-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-8nvpw" node="ha-439113-m05"
	E1115 11:16:45.403453       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ef7838ef-d143-47ee-8fd2-ef5f07f24b27(kube-system/kindnet-8nvpw) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8nvpw"
	E1115 11:16:45.403517       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8nvpw\": pod kindnet-8nvpw is already assigned to node \"ha-439113-m05\"" logger="UnhandledError" pod="kube-system/kindnet-8nvpw"
	I1115 11:16:45.404955       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8nvpw" node="ha-439113-m05"
	E1115 11:16:45.486153       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ddfsg\": pod kube-proxy-ddfsg is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kube-proxy-ddfsg" node="ha-439113-m05"
	E1115 11:16:45.486223       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ddfsg\": pod kube-proxy-ddfsg is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="kube-system/kube-proxy-ddfsg"
	E1115 11:16:45.494069       1 framework.go:1400] "Plugin Failed" err="pods \"kindnet-4c8l9\" not found" plugin="DefaultBinder" pod="kube-system/kindnet-4c8l9" node="ha-439113-m05"
	E1115 11:16:45.494142       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": pods \"kindnet-4c8l9\" not found" logger="UnhandledError" pod="kube-system/kindnet-4c8l9"
	I1115 11:16:45.494163       1 schedule_one.go:1086] "Pod doesn't exist in informer cache" pod="kube-system/kindnet-4c8l9" err="pod \"kindnet-4c8l9\" not found"
	E1115 11:16:45.509766       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kube-proxy-ddfsg\" not found" pod="kube-system/kube-proxy-ddfsg"
	E1115 11:16:45.520697       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kindnet-4c8l9\" not found" pod="kube-system/kindnet-4c8l9"
	E1115 11:16:45.621166       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tg572\": pod kindnet-tg572 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-tg572" node="ha-439113-m05"
	E1115 11:16:45.621247       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tg572\": pod kindnet-tg572 is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="kube-system/kindnet-tg572"
	I1115 11:16:45.621277       1 schedule_one.go:1086] "Pod doesn't exist in informer cache" pod="kube-system/kindnet-tg572" err="pod \"kindnet-tg572\" not found"
	E1115 11:16:45.632380       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kindnet-tg572\" not found" pod="kube-system/kindnet-tg572"
	E1115 11:16:45.689189       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4r8l9\": pod kube-proxy-4r8l9 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kube-proxy-4r8l9" node="ha-439113-m05"
	E1115 11:16:45.689255       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4r8l9\": pod kube-proxy-4r8l9 is being deleted, cannot be assigned to a host" logger="UnhandledError" pod="kube-system/kube-proxy-4r8l9"
	E1115 11:16:45.724344       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kube-proxy-4r8l9\" not found" pod="kube-system/kube-proxy-4r8l9"
	
	
	==> kubelet <==
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844253     802 projected.go:196] Error preparing data for projected volume kube-api-access-sd5j8 for pod kube-system/storage-provisioner: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844286     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a63ca66-7de2-40d8-96f0-a99da4ba3411-kube-api-access-sd5j8 podName:6a63ca66-7de2-40d8-96f0-a99da4ba3411 nodeName:}" failed. No retries permitted until 2025-11-15 11:11:57.844277125 +0000 UTC m=+109.205504722 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sd5j8" (UniqueName: "kubernetes.io/projected/6a63ca66-7de2-40d8-96f0-a99da4ba3411-kube-api-access-sd5j8") pod "storage-provisioner" (UID: "6a63ca66-7de2-40d8-96f0-a99da4ba3411") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844314     802 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844326     802 projected.go:196] Error preparing data for projected volume kube-api-access-5ghqb for pod default/busybox-7b57f96db7-vddcm: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844354     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/92adc10b-e910-45d1-8267-ee2e884d0dcc-kube-api-access-5ghqb podName:92adc10b-e910-45d1-8267-ee2e884d0dcc nodeName:}" failed. No retries permitted until 2025-11-15 11:11:57.844345777 +0000 UTC m=+109.205573365 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5ghqb" (UniqueName: "kubernetes.io/projected/92adc10b-e910-45d1-8267-ee2e884d0dcc-kube-api-access-5ghqb") pod "busybox-7b57f96db7-vddcm" (UID: "92adc10b-e910-45d1-8267-ee2e884d0dcc") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844373     802 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844479     802 projected.go:196] Error preparing data for projected volume kube-api-access-b6xlh for pod kube-system/coredns-66bc5c9577-4g6sm: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:56 ha-439113 kubelet[802]: E1115 11:11:56.844521     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9460f377-28d8-418c-9dab-9428dfbfca1d-kube-api-access-b6xlh podName:9460f377-28d8-418c-9dab-9428dfbfca1d nodeName:}" failed. No retries permitted until 2025-11-15 11:11:57.844511856 +0000 UTC m=+109.205739445 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-b6xlh" (UniqueName: "kubernetes.io/projected/9460f377-28d8-418c-9dab-9428dfbfca1d-kube-api-access-b6xlh") pod "coredns-66bc5c9577-4g6sm" (UID: "9460f377-28d8-418c-9dab-9428dfbfca1d") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:57 ha-439113 kubelet[802]: I1115 11:11:57.908131     802 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 11:11:58 ha-439113 kubelet[802]: W1115 11:11:58.358260     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-8504950f9102e2d3678db003685a9003674d358c2d886fa984b1f644a575da04 WatchSource:0}: Error finding container 8504950f9102e2d3678db003685a9003674d358c2d886fa984b1f644a575da04: Status 404 returned error can't find the container with id 8504950f9102e2d3678db003685a9003674d358c2d886fa984b1f644a575da04
	Nov 15 11:11:58 ha-439113 kubelet[802]: W1115 11:11:58.418603     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-d43213c9afa20eab4c28068b149534132632427cb558bccbf02b8458b2dd0280 WatchSource:0}: Error finding container d43213c9afa20eab4c28068b149534132632427cb558bccbf02b8458b2dd0280: Status 404 returned error can't find the container with id d43213c9afa20eab4c28068b149534132632427cb558bccbf02b8458b2dd0280
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.705715     802 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.705866     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4718f104-1eea-4e92-b339-dc6ae067eee3-kube-proxy podName:4718f104-1eea-4e92-b339-dc6ae067eee3 nodeName:}" failed. No retries permitted until 2025-11-15 11:12:00.70583574 +0000 UTC m=+112.067063329 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/4718f104-1eea-4e92-b339-dc6ae067eee3-kube-proxy") pod "kube-proxy-k7bcn" (UID: "4718f104-1eea-4e92-b339-dc6ae067eee3") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.911022     802 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.911067     802 projected.go:196] Error preparing data for projected volume kube-api-access-5ghqb for pod default/busybox-7b57f96db7-vddcm: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:11:58 ha-439113 kubelet[802]: E1115 11:11:58.911165     802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/92adc10b-e910-45d1-8267-ee2e884d0dcc-kube-api-access-5ghqb podName:92adc10b-e910-45d1-8267-ee2e884d0dcc nodeName:}" failed. No retries permitted until 2025-11-15 11:12:00.91114076 +0000 UTC m=+112.272368357 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5ghqb" (UniqueName: "kubernetes.io/projected/92adc10b-e910-45d1-8267-ee2e884d0dcc-kube-api-access-5ghqb") pod "busybox-7b57f96db7-vddcm" (UID: "92adc10b-e910-45d1-8267-ee2e884d0dcc") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 11:12:00 ha-439113 kubelet[802]: I1115 11:12:00.852948     802 scope.go:117] "RemoveContainer" containerID="be718981167470587e7edcab954bb28586e88b90bde200f9d703d4bf87527c41"
	Nov 15 11:12:00 ha-439113 kubelet[802]: E1115 11:12:00.853132     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-439113_kube-system(61daecae9db4def537bd68f54312f1ae)\"" pod="kube-system/kube-controller-manager-ha-439113" podUID="61daecae9db4def537bd68f54312f1ae"
	Nov 15 11:12:01 ha-439113 kubelet[802]: W1115 11:12:01.080611     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio-b9f35a414830a814a3c7874120d74394bc21adeb5906a90adb474cbab5a11397 WatchSource:0}: Error finding container b9f35a414830a814a3c7874120d74394bc21adeb5906a90adb474cbab5a11397: Status 404 returned error can't find the container with id b9f35a414830a814a3c7874120d74394bc21adeb5906a90adb474cbab5a11397
	Nov 15 11:12:08 ha-439113 kubelet[802]: E1115 11:12:08.835937     802 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/54bc03e5aa3c6fcbbe6935a8420792c10e6b1241a59bf0fdde396399ed9639de/diff" to get inode usage: stat /var/lib/containers/storage/overlay/54bc03e5aa3c6fcbbe6935a8420792c10e6b1241a59bf0fdde396399ed9639de/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-439113_61daecae9db4def537bd68f54312f1ae/kube-controller-manager/3.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-439113_61daecae9db4def537bd68f54312f1ae/kube-controller-manager/3.log: no such file or directory
	Nov 15 11:12:08 ha-439113 kubelet[802]: E1115 11:12:08.849660     802 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/eb045b83b5da536e46c3745bb2a8803b5c05df65a3052a5d8a939a5b61aff0de/diff" to get inode usage: stat /var/lib/containers/storage/overlay/eb045b83b5da536e46c3745bb2a8803b5c05df65a3052a5d8a939a5b61aff0de/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-439113_61daecae9db4def537bd68f54312f1ae/kube-controller-manager/4.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-439113_61daecae9db4def537bd68f54312f1ae/kube-controller-manager/4.log: no such file or directory
	Nov 15 11:12:12 ha-439113 kubelet[802]: I1115 11:12:12.853172     802 scope.go:117] "RemoveContainer" containerID="be718981167470587e7edcab954bb28586e88b90bde200f9d703d4bf87527c41"
	Nov 15 11:12:12 ha-439113 kubelet[802]: E1115 11:12:12.853836     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-439113_kube-system(61daecae9db4def537bd68f54312f1ae)\"" pod="kube-system/kube-controller-manager-ha-439113" podUID="61daecae9db4def537bd68f54312f1ae"
	Nov 15 11:12:27 ha-439113 kubelet[802]: I1115 11:12:27.852165     802 scope.go:117] "RemoveContainer" containerID="be718981167470587e7edcab954bb28586e88b90bde200f9d703d4bf87527c41"
	Nov 15 11:12:28 ha-439113 kubelet[802]: I1115 11:12:28.302685     802 scope.go:117] "RemoveContainer" containerID="d86466a64c1754474a329490ff47ef2c868ab7ca5cee646b6d77e75e89205609"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-439113 -n ha-439113
helpers_test.go:269: (dbg) Run:  kubectl --context ha-439113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.02s)
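The scheduler errors above come from DaemonSet pods (kindnet-tg572, kube-proxy-4r8l9) being bound to ha-439113-m05 while already marked for deletion, and the kubelet errors are projected volumes waiting on kube-root-ca.crt to sync shortly after the control-plane restart. A minimal follow-up sketch for local triage, assuming the ha-439113 context is still available; the field selector and the per-namespace kube-root-ca.crt ConfigMap are standard Kubernetes features, not commands taken from this run:

    # pods landing on the newly added node named in the bind errors
    kubectl --context ha-439113 get pods -n kube-system -o wide --field-selector spec.nodeName=ha-439113-m05
    # the ConfigMap the kube-api-access projected volumes were waiting for
    kubectl --context ha-439113 get configmap kube-root-ca.crt -n kube-system
    # the same API-server probe the post-mortem helper ran above
    out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-439113 -n ha-439113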

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.5s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-979835 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-979835 --output=json --user=testUser: exit status 80 (2.496693901s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a3469a19-2bb4-4292-a8e1-deee15ee1852","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-979835 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"29705c91-cbac-4165-b319-3627fcf61840","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-15T11:18:39Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"9c6101de-d498-4015-a268-7322d20249f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-979835 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.50s)
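This failure and the unpause failure that follows report the same underlying error: the pause path shells out to sudo runc list -f json, and runc exits with status 1 because /run/runc does not exist inside the node container. A small reproduction sketch against this profile, assuming it is still running; the ls check and the crictl listing are assumed diagnostics (the crictl query mirrors the one the pause code issues in the TestPause trace later in this report), not steps this test performed:

    # re-run the exact call that failed in the JSON error event above
    out/minikube-linux-arm64 ssh -p json-output-979835 -- sudo runc list -f json
    # check whether runc's default state directory is present on the node
    out/minikube-linux-arm64 ssh -p json-output-979835 -- sudo ls -ld /run/runc
    # list the CRI-managed containers for comparison
    out/minikube-linux-arm64 ssh -p json-output-979835 -- sudo crictl ps -a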

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-979835 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-979835 --output=json --user=testUser: exit status 80 (1.593227252s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1013d509-efaf-4820-b65c-2d11412df1d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-979835 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"8731f093-8b60-4bff-87f9-669d552b91b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-15T11:18:41Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"d39a1bb2-841b-4ab7-8b44-df2a8b331280","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-979835 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.59s)

                                                
                                    
x
+
TestPause/serial/Pause (7.03s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-137857 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-137857 --alsologtostderr -v=5: exit status 80 (2.369053484s)

                                                
                                                
-- stdout --
	* Pausing node pause-137857 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:40:51.330720  751828 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:40:51.331576  751828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:40:51.331621  751828 out.go:374] Setting ErrFile to fd 2...
	I1115 11:40:51.331641  751828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:40:51.331948  751828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:40:51.332283  751828 out.go:368] Setting JSON to false
	I1115 11:40:51.332342  751828 mustload.go:66] Loading cluster: pause-137857
	I1115 11:40:51.332827  751828 config.go:182] Loaded profile config "pause-137857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:40:51.333510  751828 cli_runner.go:164] Run: docker container inspect pause-137857 --format={{.State.Status}}
	I1115 11:40:51.351361  751828 host.go:66] Checking if "pause-137857" exists ...
	I1115 11:40:51.351793  751828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:40:51.410230  751828 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 11:40:51.400194535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:40:51.410898  751828 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-137857 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 11:40:51.413922  751828 out.go:179] * Pausing node pause-137857 ... 
	I1115 11:40:51.417602  751828 host.go:66] Checking if "pause-137857" exists ...
	I1115 11:40:51.417961  751828 ssh_runner.go:195] Run: systemctl --version
	I1115 11:40:51.418016  751828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:51.436181  751828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33764 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/pause-137857/id_rsa Username:docker}
	I1115 11:40:51.544280  751828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:40:51.557315  751828 pause.go:52] kubelet running: true
	I1115 11:40:51.557388  751828 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:40:51.784825  751828 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:40:51.784987  751828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:40:51.852679  751828 cri.go:89] found id: "0edaa841d5b54f4378cd6d83469319e8ac4f8aac30757c315abf3dbec49fc8d1"
	I1115 11:40:51.852704  751828 cri.go:89] found id: "058db932812e69f341abcac250349bd6e8c187dafd11cc56fcda36d8609e59e1"
	I1115 11:40:51.852709  751828 cri.go:89] found id: "a2e23ebc9fd1b0ca7799e0345fcc1c875b47bc77bd022f34805c8090a4fe0f0e"
	I1115 11:40:51.852713  751828 cri.go:89] found id: "14380df9df23f9d41205f28106bd8a47807ea891d5c0d8a8f437a06ab753b04c"
	I1115 11:40:51.852716  751828 cri.go:89] found id: "86ddf29301be37859289c1c5f546685bc84187eeffd2b7f42158ec98d7a8b59f"
	I1115 11:40:51.852720  751828 cri.go:89] found id: "11fc878711b4b05161fecbabbccacaac0a3ea8614883fb13f4fdb0e5aa15a538"
	I1115 11:40:51.852723  751828 cri.go:89] found id: "753a589caf043fd7414736e947ca13435428a97c154d59c7685ee4e40b4cb298"
	I1115 11:40:51.852726  751828 cri.go:89] found id: "dd0616de4773a796a23dd40e00be0c3d01316f7f2591993fd06c018d4a4aa991"
	I1115 11:40:51.852730  751828 cri.go:89] found id: "f987d39fb8e9536febaac7a736e61e364b97c1cde64982f9af503c04295401e2"
	I1115 11:40:51.852738  751828 cri.go:89] found id: "fdd5538b2f7f7e62b59f154b2a363d4687c38db318c41a946f26501d7164d4dd"
	I1115 11:40:51.852742  751828 cri.go:89] found id: "6acd4d6f33ed454af357db0198a45dfe3418d3e9027f6741e2204f23bbd28f6a"
	I1115 11:40:51.852745  751828 cri.go:89] found id: "94bac5dfed4e1eb49a8b8809a81cb583d530dd957d56e7afb6dae60ae4e02b66"
	I1115 11:40:51.852749  751828 cri.go:89] found id: "2d26f0dee211fd9e4cf2cd430bd4cd091ee46599dc64ce64ac55aef62ac2077f"
	I1115 11:40:51.852753  751828 cri.go:89] found id: "de91188330d1a20583f7966a076883bee5455862d604125627d4c3041168253d"
	I1115 11:40:51.852756  751828 cri.go:89] found id: ""
	I1115 11:40:51.852810  751828 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:40:51.863677  751828 retry.go:31] will retry after 141.093217ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:40:51Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:40:52.005061  751828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:40:52.019709  751828 pause.go:52] kubelet running: false
	I1115 11:40:52.019776  751828 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:40:52.212273  751828 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:40:52.212404  751828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:40:52.285734  751828 cri.go:89] found id: "0edaa841d5b54f4378cd6d83469319e8ac4f8aac30757c315abf3dbec49fc8d1"
	I1115 11:40:52.285759  751828 cri.go:89] found id: "058db932812e69f341abcac250349bd6e8c187dafd11cc56fcda36d8609e59e1"
	I1115 11:40:52.285764  751828 cri.go:89] found id: "a2e23ebc9fd1b0ca7799e0345fcc1c875b47bc77bd022f34805c8090a4fe0f0e"
	I1115 11:40:52.285768  751828 cri.go:89] found id: "14380df9df23f9d41205f28106bd8a47807ea891d5c0d8a8f437a06ab753b04c"
	I1115 11:40:52.285772  751828 cri.go:89] found id: "86ddf29301be37859289c1c5f546685bc84187eeffd2b7f42158ec98d7a8b59f"
	I1115 11:40:52.285776  751828 cri.go:89] found id: "11fc878711b4b05161fecbabbccacaac0a3ea8614883fb13f4fdb0e5aa15a538"
	I1115 11:40:52.285779  751828 cri.go:89] found id: "753a589caf043fd7414736e947ca13435428a97c154d59c7685ee4e40b4cb298"
	I1115 11:40:52.285783  751828 cri.go:89] found id: "dd0616de4773a796a23dd40e00be0c3d01316f7f2591993fd06c018d4a4aa991"
	I1115 11:40:52.285795  751828 cri.go:89] found id: "f987d39fb8e9536febaac7a736e61e364b97c1cde64982f9af503c04295401e2"
	I1115 11:40:52.285801  751828 cri.go:89] found id: "fdd5538b2f7f7e62b59f154b2a363d4687c38db318c41a946f26501d7164d4dd"
	I1115 11:40:52.285805  751828 cri.go:89] found id: "6acd4d6f33ed454af357db0198a45dfe3418d3e9027f6741e2204f23bbd28f6a"
	I1115 11:40:52.285808  751828 cri.go:89] found id: "94bac5dfed4e1eb49a8b8809a81cb583d530dd957d56e7afb6dae60ae4e02b66"
	I1115 11:40:52.285812  751828 cri.go:89] found id: "2d26f0dee211fd9e4cf2cd430bd4cd091ee46599dc64ce64ac55aef62ac2077f"
	I1115 11:40:52.285815  751828 cri.go:89] found id: "de91188330d1a20583f7966a076883bee5455862d604125627d4c3041168253d"
	I1115 11:40:52.285818  751828 cri.go:89] found id: ""
	I1115 11:40:52.285868  751828 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:40:52.296690  751828 retry.go:31] will retry after 283.041936ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:40:52Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:40:52.579974  751828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:40:52.593459  751828 pause.go:52] kubelet running: false
	I1115 11:40:52.593526  751828 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:40:52.736032  751828 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:40:52.736155  751828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:40:52.804849  751828 cri.go:89] found id: "0edaa841d5b54f4378cd6d83469319e8ac4f8aac30757c315abf3dbec49fc8d1"
	I1115 11:40:52.804947  751828 cri.go:89] found id: "058db932812e69f341abcac250349bd6e8c187dafd11cc56fcda36d8609e59e1"
	I1115 11:40:52.804953  751828 cri.go:89] found id: "a2e23ebc9fd1b0ca7799e0345fcc1c875b47bc77bd022f34805c8090a4fe0f0e"
	I1115 11:40:52.804957  751828 cri.go:89] found id: "14380df9df23f9d41205f28106bd8a47807ea891d5c0d8a8f437a06ab753b04c"
	I1115 11:40:52.804960  751828 cri.go:89] found id: "86ddf29301be37859289c1c5f546685bc84187eeffd2b7f42158ec98d7a8b59f"
	I1115 11:40:52.804963  751828 cri.go:89] found id: "11fc878711b4b05161fecbabbccacaac0a3ea8614883fb13f4fdb0e5aa15a538"
	I1115 11:40:52.804967  751828 cri.go:89] found id: "753a589caf043fd7414736e947ca13435428a97c154d59c7685ee4e40b4cb298"
	I1115 11:40:52.804970  751828 cri.go:89] found id: "dd0616de4773a796a23dd40e00be0c3d01316f7f2591993fd06c018d4a4aa991"
	I1115 11:40:52.804992  751828 cri.go:89] found id: "f987d39fb8e9536febaac7a736e61e364b97c1cde64982f9af503c04295401e2"
	I1115 11:40:52.805010  751828 cri.go:89] found id: "fdd5538b2f7f7e62b59f154b2a363d4687c38db318c41a946f26501d7164d4dd"
	I1115 11:40:52.805014  751828 cri.go:89] found id: "6acd4d6f33ed454af357db0198a45dfe3418d3e9027f6741e2204f23bbd28f6a"
	I1115 11:40:52.805017  751828 cri.go:89] found id: "94bac5dfed4e1eb49a8b8809a81cb583d530dd957d56e7afb6dae60ae4e02b66"
	I1115 11:40:52.805021  751828 cri.go:89] found id: "2d26f0dee211fd9e4cf2cd430bd4cd091ee46599dc64ce64ac55aef62ac2077f"
	I1115 11:40:52.805031  751828 cri.go:89] found id: "de91188330d1a20583f7966a076883bee5455862d604125627d4c3041168253d"
	I1115 11:40:52.805039  751828 cri.go:89] found id: ""
	I1115 11:40:52.805107  751828 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:40:52.816227  751828 retry.go:31] will retry after 574.372777ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:40:52Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:40:53.391716  751828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:40:53.404829  751828 pause.go:52] kubelet running: false
	I1115 11:40:53.404952  751828 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:40:53.543054  751828 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:40:53.543133  751828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:40:53.618684  751828 cri.go:89] found id: "0edaa841d5b54f4378cd6d83469319e8ac4f8aac30757c315abf3dbec49fc8d1"
	I1115 11:40:53.618765  751828 cri.go:89] found id: "058db932812e69f341abcac250349bd6e8c187dafd11cc56fcda36d8609e59e1"
	I1115 11:40:53.618778  751828 cri.go:89] found id: "a2e23ebc9fd1b0ca7799e0345fcc1c875b47bc77bd022f34805c8090a4fe0f0e"
	I1115 11:40:53.618782  751828 cri.go:89] found id: "14380df9df23f9d41205f28106bd8a47807ea891d5c0d8a8f437a06ab753b04c"
	I1115 11:40:53.618786  751828 cri.go:89] found id: "86ddf29301be37859289c1c5f546685bc84187eeffd2b7f42158ec98d7a8b59f"
	I1115 11:40:53.618789  751828 cri.go:89] found id: "11fc878711b4b05161fecbabbccacaac0a3ea8614883fb13f4fdb0e5aa15a538"
	I1115 11:40:53.618792  751828 cri.go:89] found id: "753a589caf043fd7414736e947ca13435428a97c154d59c7685ee4e40b4cb298"
	I1115 11:40:53.618795  751828 cri.go:89] found id: "dd0616de4773a796a23dd40e00be0c3d01316f7f2591993fd06c018d4a4aa991"
	I1115 11:40:53.618799  751828 cri.go:89] found id: "f987d39fb8e9536febaac7a736e61e364b97c1cde64982f9af503c04295401e2"
	I1115 11:40:53.618809  751828 cri.go:89] found id: "fdd5538b2f7f7e62b59f154b2a363d4687c38db318c41a946f26501d7164d4dd"
	I1115 11:40:53.618813  751828 cri.go:89] found id: "6acd4d6f33ed454af357db0198a45dfe3418d3e9027f6741e2204f23bbd28f6a"
	I1115 11:40:53.618832  751828 cri.go:89] found id: "94bac5dfed4e1eb49a8b8809a81cb583d530dd957d56e7afb6dae60ae4e02b66"
	I1115 11:40:53.618843  751828 cri.go:89] found id: "2d26f0dee211fd9e4cf2cd430bd4cd091ee46599dc64ce64ac55aef62ac2077f"
	I1115 11:40:53.618847  751828 cri.go:89] found id: "de91188330d1a20583f7966a076883bee5455862d604125627d4c3041168253d"
	I1115 11:40:53.618850  751828 cri.go:89] found id: ""
	I1115 11:40:53.618920  751828 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:40:53.633807  751828 out.go:203] 
	W1115 11:40:53.636846  751828 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:40:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:40:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 11:40:53.636978  751828 out.go:285] * 
	* 
	W1115 11:40:53.643394  751828 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 11:40:53.646313  751828 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-137857 --alsologtostderr -v=5" : exit status 80
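The trace above shows the sequence pause walks before giving up: probe the kubelet, run systemctl disable --now kubelet, enumerate CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces, then ask runc for the running set; only the runc step fails, and it is retried with 141ms, 283ms and 574ms back-offs before the GUEST_PAUSE exit. A condensed manual replay, assuming the pause-137857 node is still up; the commands are the ones from the ssh_runner lines above, only wrapped in minikube ssh:

    out/minikube-linux-arm64 ssh -p pause-137857 -- sudo systemctl is-active --quiet service kubelet
    out/minikube-linux-arm64 ssh -p pause-137857 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # the step that fails: open /run/runc: no such file or directory
    out/minikube-linux-arm64 ssh -p pause-137857 -- sudo runc list -f json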
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-137857
helpers_test.go:243: (dbg) docker inspect pause-137857:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8674ed18a6724777a5d4d93bcc57e06d88527d455d45b24b2e28a09d10ca3e1b",
	        "Created": "2025-11-15T11:39:07.197467192Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 745885,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:39:07.266718564Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8674ed18a6724777a5d4d93bcc57e06d88527d455d45b24b2e28a09d10ca3e1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8674ed18a6724777a5d4d93bcc57e06d88527d455d45b24b2e28a09d10ca3e1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/8674ed18a6724777a5d4d93bcc57e06d88527d455d45b24b2e28a09d10ca3e1b/hosts",
	        "LogPath": "/var/lib/docker/containers/8674ed18a6724777a5d4d93bcc57e06d88527d455d45b24b2e28a09d10ca3e1b/8674ed18a6724777a5d4d93bcc57e06d88527d455d45b24b2e28a09d10ca3e1b-json.log",
	        "Name": "/pause-137857",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-137857:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-137857",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8674ed18a6724777a5d4d93bcc57e06d88527d455d45b24b2e28a09d10ca3e1b",
	                "LowerDir": "/var/lib/docker/overlay2/cff7736baab34166bf7b3a9ffae054047167784cbda37d15d9aabf387b7fca8a-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cff7736baab34166bf7b3a9ffae054047167784cbda37d15d9aabf387b7fca8a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cff7736baab34166bf7b3a9ffae054047167784cbda37d15d9aabf387b7fca8a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cff7736baab34166bf7b3a9ffae054047167784cbda37d15d9aabf387b7fca8a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-137857",
	                "Source": "/var/lib/docker/volumes/pause-137857/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-137857",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-137857",
	                "name.minikube.sigs.k8s.io": "pause-137857",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ce72cdb6131a0554e191e41e996d849511e993c0f38d63074495c459c416ac4",
	            "SandboxKey": "/var/run/docker/netns/1ce72cdb6131",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33764"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33765"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33766"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33767"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-137857": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:f1:b8:93:17:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b58a2a344df4b6dd1277b577b1a0f017e112da78547520a1bd00a5940fbcc581",
	                    "EndpointID": "ea51cb11043e0bb942bc71df458916a41eca371a450ff6ce4110329d859cab2c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-137857",
	                        "8674ed18a672"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
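The inspect output above is what the pause command reads to find the node's SSH endpoint; earlier in the trace the cli_runner resolved 22/tcp to 127.0.0.1:33764 with the same Go template. A one-off version of that lookup, assuming the container is still running (the template is copied from the trace, with simpler shell quoting):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-137857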
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-137857 -n pause-137857
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-137857 -n pause-137857: exit status 2 (351.599019ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-137857 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-137857 logs -n 25: (1.435958374s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-505051 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:35 UTC │ 15 Nov 25 11:35 UTC │
	│ start   │ -p missing-upgrade-028715 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-028715    │ jenkins │ v1.32.0 │ 15 Nov 25 11:35 UTC │ 15 Nov 25 11:36 UTC │
	│ start   │ -p NoKubernetes-505051 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:35 UTC │ 15 Nov 25 11:36 UTC │
	│ delete  │ -p NoKubernetes-505051                                                                                                                   │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │ 15 Nov 25 11:36 UTC │
	│ start   │ -p NoKubernetes-505051 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │ 15 Nov 25 11:36 UTC │
	│ start   │ -p missing-upgrade-028715 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-028715    │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │ 15 Nov 25 11:37 UTC │
	│ ssh     │ -p NoKubernetes-505051 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │                     │
	│ stop    │ -p NoKubernetes-505051                                                                                                                   │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │ 15 Nov 25 11:36 UTC │
	│ start   │ -p NoKubernetes-505051 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │ 15 Nov 25 11:36 UTC │
	│ ssh     │ -p NoKubernetes-505051 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │                     │
	│ delete  │ -p NoKubernetes-505051                                                                                                                   │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │ 15 Nov 25 11:36 UTC │
	│ start   │ -p kubernetes-upgrade-436490 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-436490 │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │ 15 Nov 25 11:37 UTC │
	│ delete  │ -p missing-upgrade-028715                                                                                                                │ missing-upgrade-028715    │ jenkins │ v1.37.0 │ 15 Nov 25 11:37 UTC │ 15 Nov 25 11:37 UTC │
	│ start   │ -p stopped-upgrade-484617 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-484617    │ jenkins │ v1.32.0 │ 15 Nov 25 11:37 UTC │ 15 Nov 25 11:37 UTC │
	│ stop    │ -p kubernetes-upgrade-436490                                                                                                             │ kubernetes-upgrade-436490 │ jenkins │ v1.37.0 │ 15 Nov 25 11:37 UTC │ 15 Nov 25 11:37 UTC │
	│ start   │ -p kubernetes-upgrade-436490 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-436490 │ jenkins │ v1.37.0 │ 15 Nov 25 11:37 UTC │                     │
	│ stop    │ stopped-upgrade-484617 stop                                                                                                              │ stopped-upgrade-484617    │ jenkins │ v1.32.0 │ 15 Nov 25 11:37 UTC │ 15 Nov 25 11:37 UTC │
	│ start   │ -p stopped-upgrade-484617 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-484617    │ jenkins │ v1.37.0 │ 15 Nov 25 11:37 UTC │ 15 Nov 25 11:38 UTC │
	│ delete  │ -p stopped-upgrade-484617                                                                                                                │ stopped-upgrade-484617    │ jenkins │ v1.37.0 │ 15 Nov 25 11:38 UTC │ 15 Nov 25 11:38 UTC │
	│ start   │ -p running-upgrade-165074 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-165074    │ jenkins │ v1.32.0 │ 15 Nov 25 11:38 UTC │ 15 Nov 25 11:38 UTC │
	│ start   │ -p running-upgrade-165074 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-165074    │ jenkins │ v1.37.0 │ 15 Nov 25 11:38 UTC │ 15 Nov 25 11:38 UTC │
	│ delete  │ -p running-upgrade-165074                                                                                                                │ running-upgrade-165074    │ jenkins │ v1.37.0 │ 15 Nov 25 11:38 UTC │ 15 Nov 25 11:39 UTC │
	│ start   │ -p pause-137857 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-137857              │ jenkins │ v1.37.0 │ 15 Nov 25 11:39 UTC │ 15 Nov 25 11:40 UTC │
	│ start   │ -p pause-137857 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-137857              │ jenkins │ v1.37.0 │ 15 Nov 25 11:40 UTC │ 15 Nov 25 11:40 UTC │
	│ pause   │ -p pause-137857 --alsologtostderr -v=5                                                                                                   │ pause-137857              │ jenkins │ v1.37.0 │ 15 Nov 25 11:40 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:40:22
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:40:22.343163  750044 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:40:22.343378  750044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:40:22.343405  750044 out.go:374] Setting ErrFile to fd 2...
	I1115 11:40:22.343424  750044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:40:22.343725  750044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:40:22.344116  750044 out.go:368] Setting JSON to false
	I1115 11:40:22.345153  750044 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12173,"bootTime":1763194649,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:40:22.345247  750044 start.go:143] virtualization:  
	I1115 11:40:22.348162  750044 out.go:179] * [pause-137857] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:40:22.351906  750044 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:40:22.351974  750044 notify.go:221] Checking for updates...
	I1115 11:40:22.357712  750044 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:40:22.360761  750044 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:40:22.363770  750044 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:40:22.366708  750044 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:40:22.369661  750044 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:40:22.373198  750044 config.go:182] Loaded profile config "pause-137857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:40:22.373765  750044 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:40:22.405387  750044 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:40:22.405503  750044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:40:22.465203  750044 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 11:40:22.453833463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:40:22.465330  750044 docker.go:319] overlay module found
	I1115 11:40:22.468560  750044 out.go:179] * Using the docker driver based on existing profile
	I1115 11:40:22.471593  750044 start.go:309] selected driver: docker
	I1115 11:40:22.471618  750044 start.go:930] validating driver "docker" against &{Name:pause-137857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-137857 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:40:22.471761  750044 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:40:22.471884  750044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:40:22.545682  750044 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 11:40:22.535348224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:40:22.546115  750044 cni.go:84] Creating CNI manager for ""
	I1115 11:40:22.546178  750044 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:40:22.546226  750044 start.go:353] cluster config:
	{Name:pause-137857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-137857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:40:22.549561  750044 out.go:179] * Starting "pause-137857" primary control-plane node in "pause-137857" cluster
	I1115 11:40:22.552507  750044 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:40:22.555505  750044 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:40:22.558572  750044 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:40:22.558627  750044 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 11:40:22.558650  750044 cache.go:65] Caching tarball of preloaded images
	I1115 11:40:22.558662  750044 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:40:22.558734  750044 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:40:22.558744  750044 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:40:22.558882  750044 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/config.json ...
	I1115 11:40:22.579271  750044 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:40:22.579291  750044 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:40:22.579313  750044 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:40:22.579337  750044 start.go:360] acquireMachinesLock for pause-137857: {Name:mk9cd9983ffd468b7568b6b094e521a7bf0b03a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:40:22.579399  750044 start.go:364] duration metric: took 45.703µs to acquireMachinesLock for "pause-137857"
	I1115 11:40:22.579420  750044 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:40:22.579425  750044 fix.go:54] fixHost starting: 
	I1115 11:40:22.579693  750044 cli_runner.go:164] Run: docker container inspect pause-137857 --format={{.State.Status}}
	I1115 11:40:22.597789  750044 fix.go:112] recreateIfNeeded on pause-137857: state=Running err=<nil>
	W1115 11:40:22.597823  750044 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:40:22.924959  735859 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:40:22.925418  735859 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 11:40:22.925472  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 11:40:22.925533  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 11:40:22.971972  735859 cri.go:89] found id: "ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:22.971998  735859 cri.go:89] found id: ""
	I1115 11:40:22.972008  735859 logs.go:282] 1 containers: [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd]
	I1115 11:40:22.972064  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:22.977163  735859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 11:40:22.977232  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 11:40:23.023168  735859 cri.go:89] found id: ""
	I1115 11:40:23.023202  735859 logs.go:282] 0 containers: []
	W1115 11:40:23.023211  735859 logs.go:284] No container was found matching "etcd"
	I1115 11:40:23.023217  735859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 11:40:23.023293  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 11:40:23.066000  735859 cri.go:89] found id: ""
	I1115 11:40:23.066029  735859 logs.go:282] 0 containers: []
	W1115 11:40:23.066037  735859 logs.go:284] No container was found matching "coredns"
	I1115 11:40:23.066049  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 11:40:23.066120  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 11:40:23.105079  735859 cri.go:89] found id: "aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:23.105100  735859 cri.go:89] found id: ""
	I1115 11:40:23.105108  735859 logs.go:282] 1 containers: [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f]
	I1115 11:40:23.105170  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:23.110055  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 11:40:23.110126  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 11:40:23.149343  735859 cri.go:89] found id: ""
	I1115 11:40:23.149368  735859 logs.go:282] 0 containers: []
	W1115 11:40:23.149376  735859 logs.go:284] No container was found matching "kube-proxy"
	I1115 11:40:23.149382  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 11:40:23.149445  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 11:40:23.185508  735859 cri.go:89] found id: "e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:23.185533  735859 cri.go:89] found id: ""
	I1115 11:40:23.185542  735859 logs.go:282] 1 containers: [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138]
	I1115 11:40:23.185599  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:23.192501  735859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 11:40:23.192589  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 11:40:23.250483  735859 cri.go:89] found id: ""
	I1115 11:40:23.250509  735859 logs.go:282] 0 containers: []
	W1115 11:40:23.250517  735859 logs.go:284] No container was found matching "kindnet"
	I1115 11:40:23.250524  735859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 11:40:23.250581  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 11:40:23.281401  735859 cri.go:89] found id: ""
	I1115 11:40:23.281428  735859 logs.go:282] 0 containers: []
	W1115 11:40:23.281437  735859 logs.go:284] No container was found matching "storage-provisioner"
	I1115 11:40:23.281445  735859 logs.go:123] Gathering logs for kube-controller-manager [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138] ...
	I1115 11:40:23.281457  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:23.321061  735859 logs.go:123] Gathering logs for CRI-O ...
	I1115 11:40:23.321091  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 11:40:23.396329  735859 logs.go:123] Gathering logs for container status ...
	I1115 11:40:23.396450  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 11:40:23.429480  735859 logs.go:123] Gathering logs for kubelet ...
	I1115 11:40:23.429508  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 11:40:23.563912  735859 logs.go:123] Gathering logs for dmesg ...
	I1115 11:40:23.563989  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 11:40:23.582858  735859 logs.go:123] Gathering logs for describe nodes ...
	I1115 11:40:23.582884  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 11:40:23.655218  735859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 11:40:23.655237  735859 logs.go:123] Gathering logs for kube-apiserver [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd] ...
	I1115 11:40:23.655250  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:23.695215  735859 logs.go:123] Gathering logs for kube-scheduler [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f] ...
	I1115 11:40:23.695286  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:26.272952  735859 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:40:26.273403  735859 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 11:40:26.273454  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 11:40:26.273516  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 11:40:26.299985  735859 cri.go:89] found id: "ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:26.300007  735859 cri.go:89] found id: ""
	I1115 11:40:26.300015  735859 logs.go:282] 1 containers: [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd]
	I1115 11:40:26.300074  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:26.303666  735859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 11:40:26.303738  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 11:40:26.331616  735859 cri.go:89] found id: ""
	I1115 11:40:26.331639  735859 logs.go:282] 0 containers: []
	W1115 11:40:26.331647  735859 logs.go:284] No container was found matching "etcd"
	I1115 11:40:26.331654  735859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 11:40:26.331714  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 11:40:26.357926  735859 cri.go:89] found id: ""
	I1115 11:40:26.357950  735859 logs.go:282] 0 containers: []
	W1115 11:40:26.357958  735859 logs.go:284] No container was found matching "coredns"
	I1115 11:40:26.357964  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 11:40:26.358021  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 11:40:26.384014  735859 cri.go:89] found id: "aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:26.384036  735859 cri.go:89] found id: ""
	I1115 11:40:26.384044  735859 logs.go:282] 1 containers: [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f]
	I1115 11:40:26.384109  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:26.387772  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 11:40:26.387868  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 11:40:26.413628  735859 cri.go:89] found id: ""
	I1115 11:40:26.413653  735859 logs.go:282] 0 containers: []
	W1115 11:40:26.413662  735859 logs.go:284] No container was found matching "kube-proxy"
	I1115 11:40:26.413668  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 11:40:26.413726  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 11:40:26.443614  735859 cri.go:89] found id: "e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:26.443677  735859 cri.go:89] found id: ""
	I1115 11:40:26.443698  735859 logs.go:282] 1 containers: [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138]
	I1115 11:40:26.443788  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:26.447658  735859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 11:40:26.447739  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 11:40:26.474955  735859 cri.go:89] found id: ""
	I1115 11:40:26.474980  735859 logs.go:282] 0 containers: []
	W1115 11:40:26.474989  735859 logs.go:284] No container was found matching "kindnet"
	I1115 11:40:26.474995  735859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 11:40:26.475055  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 11:40:26.504742  735859 cri.go:89] found id: ""
	I1115 11:40:26.504765  735859 logs.go:282] 0 containers: []
	W1115 11:40:26.504773  735859 logs.go:284] No container was found matching "storage-provisioner"
	I1115 11:40:26.504781  735859 logs.go:123] Gathering logs for dmesg ...
	I1115 11:40:26.504795  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 11:40:26.521914  735859 logs.go:123] Gathering logs for describe nodes ...
	I1115 11:40:26.521943  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 11:40:26.586030  735859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 11:40:26.586049  735859 logs.go:123] Gathering logs for kube-apiserver [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd] ...
	I1115 11:40:26.586063  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:26.621987  735859 logs.go:123] Gathering logs for kube-scheduler [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f] ...
	I1115 11:40:26.622018  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:26.680896  735859 logs.go:123] Gathering logs for kube-controller-manager [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138] ...
	I1115 11:40:26.680929  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:26.706797  735859 logs.go:123] Gathering logs for CRI-O ...
	I1115 11:40:26.706832  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 11:40:26.761844  735859 logs.go:123] Gathering logs for container status ...
	I1115 11:40:26.761881  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 11:40:26.796053  735859 logs.go:123] Gathering logs for kubelet ...
	I1115 11:40:26.796080  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 11:40:22.600973  750044 out.go:252] * Updating the running docker "pause-137857" container ...
	I1115 11:40:22.601014  750044 machine.go:94] provisionDockerMachine start ...
	I1115 11:40:22.601114  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:22.618232  750044 main.go:143] libmachine: Using SSH client type: native
	I1115 11:40:22.618553  750044 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33764 <nil> <nil>}
	I1115 11:40:22.618568  750044 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:40:22.768430  750044 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-137857
	
	I1115 11:40:22.768474  750044 ubuntu.go:182] provisioning hostname "pause-137857"
	I1115 11:40:22.768540  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:22.786189  750044 main.go:143] libmachine: Using SSH client type: native
	I1115 11:40:22.786540  750044 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33764 <nil> <nil>}
	I1115 11:40:22.786561  750044 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-137857 && echo "pause-137857" | sudo tee /etc/hostname
	I1115 11:40:22.955653  750044 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-137857
	
	I1115 11:40:22.955734  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:22.990517  750044 main.go:143] libmachine: Using SSH client type: native
	I1115 11:40:22.990828  750044 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33764 <nil> <nil>}
	I1115 11:40:22.990844  750044 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-137857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-137857/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-137857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:40:23.161704  750044 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:40:23.161797  750044 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:40:23.161838  750044 ubuntu.go:190] setting up certificates
	I1115 11:40:23.161861  750044 provision.go:84] configureAuth start
	I1115 11:40:23.161941  750044 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-137857
	I1115 11:40:23.186502  750044 provision.go:143] copyHostCerts
	I1115 11:40:23.186568  750044 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:40:23.186583  750044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:40:23.186659  750044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:40:23.186760  750044 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:40:23.186766  750044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:40:23.186794  750044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:40:23.186848  750044 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:40:23.186853  750044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:40:23.186876  750044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:40:23.186921  750044 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.pause-137857 san=[127.0.0.1 192.168.85.2 localhost minikube pause-137857]
	I1115 11:40:23.788402  750044 provision.go:177] copyRemoteCerts
	I1115 11:40:23.788493  750044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:40:23.788552  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:23.806098  750044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33764 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/pause-137857/id_rsa Username:docker}
	I1115 11:40:23.913191  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:40:23.931003  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:40:23.950141  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 11:40:23.968633  750044 provision.go:87] duration metric: took 806.735388ms to configureAuth
	I1115 11:40:23.968661  750044 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:40:23.968914  750044 config.go:182] Loaded profile config "pause-137857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:40:23.969030  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:23.986100  750044 main.go:143] libmachine: Using SSH client type: native
	I1115 11:40:23.986421  750044 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33764 <nil> <nil>}
	I1115 11:40:23.986441  750044 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:40:29.334327  750044 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:40:29.334350  750044 machine.go:97] duration metric: took 6.733326599s to provisionDockerMachine
	I1115 11:40:29.334361  750044 start.go:293] postStartSetup for "pause-137857" (driver="docker")
	I1115 11:40:29.334372  750044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:40:29.334445  750044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:40:29.334496  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:29.353124  750044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33764 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/pause-137857/id_rsa Username:docker}
	I1115 11:40:29.458070  750044 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:40:29.462723  750044 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:40:29.462754  750044 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:40:29.462766  750044 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:40:29.462823  750044 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:40:29.462919  750044 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:40:29.463036  750044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:40:29.471724  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:40:29.496504  750044 start.go:296] duration metric: took 162.127566ms for postStartSetup
	I1115 11:40:29.496598  750044 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:40:29.496649  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:29.516652  750044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33764 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/pause-137857/id_rsa Username:docker}
	I1115 11:40:29.631339  750044 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:40:29.638361  750044 fix.go:56] duration metric: took 7.058928115s for fixHost
	I1115 11:40:29.638384  750044 start.go:83] releasing machines lock for "pause-137857", held for 7.058975491s
	I1115 11:40:29.638452  750044 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-137857
	I1115 11:40:29.658129  750044 ssh_runner.go:195] Run: cat /version.json
	I1115 11:40:29.658215  750044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:40:29.658284  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:29.658296  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:29.690186  750044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33764 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/pause-137857/id_rsa Username:docker}
	I1115 11:40:29.698824  750044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33764 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/pause-137857/id_rsa Username:docker}
	I1115 11:40:29.901603  750044 ssh_runner.go:195] Run: systemctl --version
	I1115 11:40:29.913413  750044 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:40:29.982866  750044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:40:30.002376  750044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:40:30.002462  750044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:40:30.013723  750044 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:40:30.013748  750044 start.go:496] detecting cgroup driver to use...
	I1115 11:40:30.013787  750044 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:40:30.013849  750044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:40:30.037942  750044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:40:30.076803  750044 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:40:30.076897  750044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:40:30.123598  750044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:40:30.141756  750044 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:40:30.345781  750044 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:40:30.476196  750044 docker.go:234] disabling docker service ...
	I1115 11:40:30.476271  750044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:40:30.492043  750044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:40:30.506022  750044 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:40:30.647299  750044 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:40:30.787802  750044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:40:30.802257  750044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:40:30.818088  750044 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:40:30.818169  750044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:40:30.828086  750044 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:40:30.828172  750044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:40:30.838145  750044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:40:30.847943  750044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:40:30.858008  750044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:40:30.867305  750044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:40:30.877461  750044 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:40:30.887099  750044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:40:30.896988  750044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:40:30.905481  750044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:40:30.913871  750044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:40:31.048036  750044 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:40:31.245401  750044 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:40:31.245530  750044 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:40:31.249775  750044 start.go:564] Will wait 60s for crictl version
	I1115 11:40:31.249869  750044 ssh_runner.go:195] Run: which crictl
	I1115 11:40:31.253524  750044 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:40:31.277149  750044 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:40:31.277272  750044 ssh_runner.go:195] Run: crio --version
	I1115 11:40:31.307071  750044 ssh_runner.go:195] Run: crio --version
	I1115 11:40:31.340938  750044 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:40:29.412911  735859 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:40:29.413322  735859 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 11:40:29.413365  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 11:40:29.413429  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 11:40:29.439848  735859 cri.go:89] found id: "ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:29.439870  735859 cri.go:89] found id: ""
	I1115 11:40:29.439879  735859 logs.go:282] 1 containers: [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd]
	I1115 11:40:29.439940  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:29.443870  735859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 11:40:29.443950  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 11:40:29.481762  735859 cri.go:89] found id: ""
	I1115 11:40:29.481787  735859 logs.go:282] 0 containers: []
	W1115 11:40:29.481797  735859 logs.go:284] No container was found matching "etcd"
	I1115 11:40:29.481803  735859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 11:40:29.481861  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 11:40:29.524354  735859 cri.go:89] found id: ""
	I1115 11:40:29.524375  735859 logs.go:282] 0 containers: []
	W1115 11:40:29.524384  735859 logs.go:284] No container was found matching "coredns"
	I1115 11:40:29.524391  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 11:40:29.524451  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 11:40:29.565015  735859 cri.go:89] found id: "aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:29.565036  735859 cri.go:89] found id: ""
	I1115 11:40:29.565044  735859 logs.go:282] 1 containers: [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f]
	I1115 11:40:29.565102  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:29.568815  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 11:40:29.568922  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 11:40:29.599959  735859 cri.go:89] found id: ""
	I1115 11:40:29.599983  735859 logs.go:282] 0 containers: []
	W1115 11:40:29.599993  735859 logs.go:284] No container was found matching "kube-proxy"
	I1115 11:40:29.600005  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 11:40:29.600067  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 11:40:29.630278  735859 cri.go:89] found id: "e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:29.630301  735859 cri.go:89] found id: ""
	I1115 11:40:29.630309  735859 logs.go:282] 1 containers: [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138]
	I1115 11:40:29.630364  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:29.635730  735859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 11:40:29.635803  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 11:40:29.682511  735859 cri.go:89] found id: ""
	I1115 11:40:29.682533  735859 logs.go:282] 0 containers: []
	W1115 11:40:29.682542  735859 logs.go:284] No container was found matching "kindnet"
	I1115 11:40:29.682548  735859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 11:40:29.682605  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 11:40:29.724651  735859 cri.go:89] found id: ""
	I1115 11:40:29.724676  735859 logs.go:282] 0 containers: []
	W1115 11:40:29.724685  735859 logs.go:284] No container was found matching "storage-provisioner"
	I1115 11:40:29.724694  735859 logs.go:123] Gathering logs for describe nodes ...
	I1115 11:40:29.724705  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 11:40:29.806075  735859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 11:40:29.806096  735859 logs.go:123] Gathering logs for kube-apiserver [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd] ...
	I1115 11:40:29.806113  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:29.858235  735859 logs.go:123] Gathering logs for kube-scheduler [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f] ...
	I1115 11:40:29.858271  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:29.935792  735859 logs.go:123] Gathering logs for kube-controller-manager [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138] ...
	I1115 11:40:29.935876  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:29.967571  735859 logs.go:123] Gathering logs for CRI-O ...
	I1115 11:40:29.967598  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 11:40:30.035481  735859 logs.go:123] Gathering logs for container status ...
	I1115 11:40:30.035524  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 11:40:30.119914  735859 logs.go:123] Gathering logs for kubelet ...
	I1115 11:40:30.120003  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 11:40:30.278357  735859 logs.go:123] Gathering logs for dmesg ...
	I1115 11:40:30.278436  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 11:40:31.343867  750044 cli_runner.go:164] Run: docker network inspect pause-137857 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:40:31.360553  750044 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 11:40:31.364716  750044 kubeadm.go:884] updating cluster {Name:pause-137857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-137857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:40:31.364915  750044 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:40:31.364991  750044 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:40:31.397216  750044 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:40:31.397245  750044 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:40:31.397305  750044 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:40:31.429547  750044 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:40:31.429572  750044 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:40:31.429579  750044 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1115 11:40:31.429687  750044 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-137857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-137857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:40:31.429781  750044 ssh_runner.go:195] Run: crio config
	I1115 11:40:31.497029  750044 cni.go:84] Creating CNI manager for ""
	I1115 11:40:31.497052  750044 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:40:31.497069  750044 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:40:31.497112  750044 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-137857 NodeName:pause-137857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:40:31.497284  750044 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-137857"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
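The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a shape check only, here is a small sketch that splits such a stream and reads each document's kind with `sigs.k8s.io/yaml`; the abbreviated constant and the library choice are assumptions for illustration, and this is not how kubeadm itself validates its config.

```go
package main

import (
	"fmt"
	"strings"

	"sigs.k8s.io/yaml"
)

// kubeadmConfig abbreviates the multi-document YAML shown above.
const kubeadmConfig = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	// Split on the YAML document separator and report each document's kind.
	for _, doc := range strings.Split(kubeadmConfig, "\n---\n") {
		var meta struct {
			APIVersion string `json:"apiVersion"`
			Kind       string `json:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
			fmt.Println("unparseable document:", err)
			continue
		}
		fmt.Printf("%s (%s)\n", meta.Kind, meta.APIVersion)
	}
}
```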
	
	I1115 11:40:31.497365  750044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:40:31.506287  750044 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:40:31.506368  750044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:40:31.514487  750044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1115 11:40:31.527957  750044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:40:31.541904  750044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
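The "scp memory --> path (N bytes)" lines above push in-memory byte slices straight to files on the node. One way to get the same effect, sketched here with `golang.org/x/crypto/ssh` by piping the payload into `sudo tee` on the remote host; the address, key path, user, and target file are placeholders, and this is an analogue of the logged behaviour rather than minikube's actual transfer code.

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushBytes writes payload to remotePath on the host behind client, roughly
// what the "scp memory --> path (N bytes)" log lines describe.
func pushBytes(client *ssh.Client, payload []byte, remotePath string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(payload)
	// tee copies stdin to the target file; sudo is needed for system paths.
	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
}

func main() {
	// Placeholder connection details; a real caller would use the node's
	// mapped SSH port and the cluster's generated key.
	key, err := os.ReadFile("/path/to/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.85.2:22", &ssh.ClientConfig{
		User:            "root",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	if err := pushBytes(client, []byte("example contents\n"), "/var/tmp/minikube/example.txt"); err != nil {
		panic(err)
	}
}
```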
	I1115 11:40:31.555585  750044 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:40:31.559338  750044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:40:31.688678  750044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:40:31.702174  750044 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857 for IP: 192.168.85.2
	I1115 11:40:31.702195  750044 certs.go:195] generating shared ca certs ...
	I1115 11:40:31.702212  750044 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:40:31.702350  750044 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:40:31.702395  750044 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:40:31.702405  750044 certs.go:257] generating profile certs ...
	I1115 11:40:31.702491  750044 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/client.key
	I1115 11:40:31.702559  750044 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/apiserver.key.430a59de
	I1115 11:40:31.702600  750044 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/proxy-client.key
	I1115 11:40:31.702710  750044 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:40:31.702747  750044 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:40:31.702763  750044 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:40:31.702789  750044 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:40:31.702814  750044 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:40:31.702841  750044 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:40:31.702887  750044 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:40:31.703534  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:40:31.723309  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:40:31.743790  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:40:31.762728  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:40:31.781757  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 11:40:31.799336  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:40:31.817357  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:40:31.835575  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:40:31.853805  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:40:31.871855  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:40:31.889358  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:40:31.907436  750044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:40:31.920418  750044 ssh_runner.go:195] Run: openssl version
	I1115 11:40:31.926699  750044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:40:31.935189  750044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:40:31.939100  750044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:40:31.939168  750044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:40:31.982413  750044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:40:31.990315  750044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:40:32.000251  750044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:40:32.006977  750044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:40:32.007093  750044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:40:32.049001  750044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:40:32.057434  750044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:40:32.066494  750044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:40:32.070534  750044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:40:32.070638  750044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:40:32.112577  750044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
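The certificate steps just above follow the standard ca-certificates layout: copy the PEM into `/usr/share/ca-certificates`, compute its OpenSSL subject hash, then symlink `/etc/ssl/certs/<hash>.0` at it so TLS clients on the node trust the CA. A minimal sketch of that install step, shelling out to `openssl x509 -hash -noout` the same way the log does; the certificate path is taken from the log, everything else is illustrative.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installTrusted links certPath into /etc/ssl/certs under its OpenSSL subject
// hash, mirroring the "test -L ... || ln -fs ..." commands above.
func installTrusted(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, then point <hash>.0 at the certificate.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installTrusted("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```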
	I1115 11:40:32.120833  750044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:40:32.124916  750044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:40:32.166455  750044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:40:32.208556  750044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:40:32.259331  750044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:40:32.307517  750044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:40:32.356011  750044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
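Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is what decides whether certs get regenerated. The same check expressed with the standard library, as a sketch; the certificate path is one of the files checked in the log.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// the same question `openssl x509 -checkend 86400` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h, regeneration needed")
	} else {
		fmt.Println("certificate still valid for at least 24h")
	}
}
```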
	I1115 11:40:32.408880  750044 kubeadm.go:401] StartCluster: {Name:pause-137857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-137857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:40:32.409008  750044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:40:32.409095  750044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:40:32.509009  750044 cri.go:89] found id: "11fc878711b4b05161fecbabbccacaac0a3ea8614883fb13f4fdb0e5aa15a538"
	I1115 11:40:32.509076  750044 cri.go:89] found id: "dd0616de4773a796a23dd40e00be0c3d01316f7f2591993fd06c018d4a4aa991"
	I1115 11:40:32.509095  750044 cri.go:89] found id: "f987d39fb8e9536febaac7a736e61e364b97c1cde64982f9af503c04295401e2"
	I1115 11:40:32.509113  750044 cri.go:89] found id: "fdd5538b2f7f7e62b59f154b2a363d4687c38db318c41a946f26501d7164d4dd"
	I1115 11:40:32.509131  750044 cri.go:89] found id: "6acd4d6f33ed454af357db0198a45dfe3418d3e9027f6741e2204f23bbd28f6a"
	I1115 11:40:32.509148  750044 cri.go:89] found id: "94bac5dfed4e1eb49a8b8809a81cb583d530dd957d56e7afb6dae60ae4e02b66"
	I1115 11:40:32.509164  750044 cri.go:89] found id: "2d26f0dee211fd9e4cf2cd430bd4cd091ee46599dc64ce64ac55aef62ac2077f"
	I1115 11:40:32.509181  750044 cri.go:89] found id: "de91188330d1a20583f7966a076883bee5455862d604125627d4c3041168253d"
	I1115 11:40:32.509210  750044 cri.go:89] found id: ""
	I1115 11:40:32.509276  750044 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 11:40:32.527195  750044 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:40:32Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:40:32.527334  750044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:40:32.543579  750044 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:40:32.543647  750044 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:40:32.543715  750044 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:40:32.565296  750044 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:40:32.565955  750044 kubeconfig.go:125] found "pause-137857" server: "https://192.168.85.2:8443"
	I1115 11:40:32.566826  750044 kapi.go:59] client config for pause-137857: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 11:40:32.567419  750044 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 11:40:32.567497  750044 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 11:40:32.567518  750044 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 11:40:32.567538  750044 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 11:40:32.567559  750044 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
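The kapi.go line above dumps a client-go `rest.Config` built from the profile's client cert/key and the cluster CA. A sketch of constructing such a config and using it follows; the server URL and file paths are copied from the log, but the clientset call at the end is only an example of exercising the config, not what minikube does at this point.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Certificate-based client config, matching the TLSClientConfig fields
	// visible in the kapi.go dump above.
	cfg := &rest.Config{
		Host: "https://192.168.85.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes visible through this config:", len(nodes.Items))
}
```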
	I1115 11:40:32.567878  750044 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:40:32.590251  750044 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 11:40:32.590326  750044 kubeadm.go:602] duration metric: took 46.657198ms to restartPrimaryControlPlane
	I1115 11:40:32.590349  750044 kubeadm.go:403] duration metric: took 181.496048ms to StartCluster
	I1115 11:40:32.590380  750044 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:40:32.590457  750044 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:40:32.591331  750044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:40:32.591621  750044 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:40:32.592061  750044 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:40:32.592243  750044 config.go:182] Loaded profile config "pause-137857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:40:32.595350  750044 out.go:179] * Enabled addons: 
	I1115 11:40:32.595462  750044 out.go:179] * Verifying Kubernetes components...
	I1115 11:40:32.798274  735859 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:40:32.798668  735859 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 11:40:32.798715  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 11:40:32.798772  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 11:40:32.853912  735859 cri.go:89] found id: "ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:32.853933  735859 cri.go:89] found id: ""
	I1115 11:40:32.853941  735859 logs.go:282] 1 containers: [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd]
	I1115 11:40:32.853999  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:32.858337  735859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 11:40:32.858417  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 11:40:32.897146  735859 cri.go:89] found id: ""
	I1115 11:40:32.897172  735859 logs.go:282] 0 containers: []
	W1115 11:40:32.897180  735859 logs.go:284] No container was found matching "etcd"
	I1115 11:40:32.897186  735859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 11:40:32.897248  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 11:40:32.943386  735859 cri.go:89] found id: ""
	I1115 11:40:32.943412  735859 logs.go:282] 0 containers: []
	W1115 11:40:32.943421  735859 logs.go:284] No container was found matching "coredns"
	I1115 11:40:32.943428  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 11:40:32.943492  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 11:40:32.981557  735859 cri.go:89] found id: "aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:32.981580  735859 cri.go:89] found id: ""
	I1115 11:40:32.981588  735859 logs.go:282] 1 containers: [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f]
	I1115 11:40:32.981641  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:32.987644  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 11:40:32.987719  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 11:40:33.034215  735859 cri.go:89] found id: ""
	I1115 11:40:33.034241  735859 logs.go:282] 0 containers: []
	W1115 11:40:33.034250  735859 logs.go:284] No container was found matching "kube-proxy"
	I1115 11:40:33.034257  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 11:40:33.034324  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 11:40:33.084139  735859 cri.go:89] found id: "e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:33.084163  735859 cri.go:89] found id: ""
	I1115 11:40:33.084174  735859 logs.go:282] 1 containers: [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138]
	I1115 11:40:33.084234  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:33.091933  735859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 11:40:33.092006  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 11:40:33.138117  735859 cri.go:89] found id: ""
	I1115 11:40:33.138143  735859 logs.go:282] 0 containers: []
	W1115 11:40:33.138152  735859 logs.go:284] No container was found matching "kindnet"
	I1115 11:40:33.138158  735859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 11:40:33.138221  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 11:40:33.194769  735859 cri.go:89] found id: ""
	I1115 11:40:33.194797  735859 logs.go:282] 0 containers: []
	W1115 11:40:33.194805  735859 logs.go:284] No container was found matching "storage-provisioner"
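The listing cycle above queries each control-plane component with `crictl ps -a --quiet --name=<component>` and records whichever container IDs come back, flagging components with no match. A compact sketch of that loop; the component list is the one visible in the log, and the invocation mirrors the logged command.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of CRI containers whose name matches component,
// the same query the "listing CRI containers" lines above issue.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; drop blanks.
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "query failed:", err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}
```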
	I1115 11:40:33.194814  735859 logs.go:123] Gathering logs for kube-controller-manager [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138] ...
	I1115 11:40:33.194851  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:33.246395  735859 logs.go:123] Gathering logs for CRI-O ...
	I1115 11:40:33.246422  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 11:40:33.317521  735859 logs.go:123] Gathering logs for container status ...
	I1115 11:40:33.317558  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 11:40:33.370857  735859 logs.go:123] Gathering logs for kubelet ...
	I1115 11:40:33.370884  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 11:40:33.531366  735859 logs.go:123] Gathering logs for dmesg ...
	I1115 11:40:33.531449  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 11:40:33.556181  735859 logs.go:123] Gathering logs for describe nodes ...
	I1115 11:40:33.556207  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 11:40:33.695772  735859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 11:40:33.695842  735859 logs.go:123] Gathering logs for kube-apiserver [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd] ...
	I1115 11:40:33.695869  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:33.769231  735859 logs.go:123] Gathering logs for kube-scheduler [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f] ...
	I1115 11:40:33.769267  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
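The gathering pass above pulls the last 400 lines from several sources: per-container `crictl logs --tail 400`, `journalctl -u crio` and `-u kubelet`, a filtered `dmesg`, and a `kubectl describe nodes` that fails while the apiserver is down. A sketch of collecting such outputs into a map keyed by source name; the commands are the ones in the log, and running each through `bash -c` matches the logged invocation, but the surrounding structure is illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs each diagnostic command through bash and keeps its combined
// output, mirroring the "Gathering logs for ..." loop above.
func gather(sources map[string]string) map[string]string {
	logs := make(map[string]string)
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			// Keep going: a dead apiserver makes some of these fail, as seen above.
			logs[name] = fmt.Sprintf("failed (%v):\n%s", err, out)
			continue
		}
		logs[name] = string(out)
	}
	return logs
}

func main() {
	logs := gather(map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"CRI-O":   "sudo journalctl -u crio -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	})
	for name, out := range logs {
		fmt.Printf("== %s: %d bytes\n", name, len(out))
	}
}
```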
	I1115 11:40:36.351410  735859 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:40:36.351780  735859 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 11:40:36.351821  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 11:40:36.351873  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 11:40:36.406205  735859 cri.go:89] found id: "ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:36.406224  735859 cri.go:89] found id: ""
	I1115 11:40:36.406232  735859 logs.go:282] 1 containers: [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd]
	I1115 11:40:36.406299  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:36.410401  735859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 11:40:36.410472  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 11:40:36.458741  735859 cri.go:89] found id: ""
	I1115 11:40:36.458811  735859 logs.go:282] 0 containers: []
	W1115 11:40:36.458822  735859 logs.go:284] No container was found matching "etcd"
	I1115 11:40:36.458830  735859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 11:40:36.458923  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 11:40:36.498987  735859 cri.go:89] found id: ""
	I1115 11:40:36.499060  735859 logs.go:282] 0 containers: []
	W1115 11:40:36.499082  735859 logs.go:284] No container was found matching "coredns"
	I1115 11:40:36.499104  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 11:40:36.499195  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 11:40:36.554202  735859 cri.go:89] found id: "aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:36.554276  735859 cri.go:89] found id: ""
	I1115 11:40:36.554299  735859 logs.go:282] 1 containers: [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f]
	I1115 11:40:36.554392  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:36.561534  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 11:40:36.561615  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 11:40:36.603698  735859 cri.go:89] found id: ""
	I1115 11:40:36.603723  735859 logs.go:282] 0 containers: []
	W1115 11:40:36.603732  735859 logs.go:284] No container was found matching "kube-proxy"
	I1115 11:40:36.603741  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 11:40:36.603797  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 11:40:36.641314  735859 cri.go:89] found id: "e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:36.641337  735859 cri.go:89] found id: ""
	I1115 11:40:36.641345  735859 logs.go:282] 1 containers: [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138]
	I1115 11:40:36.641404  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:36.645615  735859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 11:40:36.645686  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 11:40:36.677592  735859 cri.go:89] found id: ""
	I1115 11:40:36.677617  735859 logs.go:282] 0 containers: []
	W1115 11:40:36.677625  735859 logs.go:284] No container was found matching "kindnet"
	I1115 11:40:36.677631  735859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 11:40:36.677691  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 11:40:36.723093  735859 cri.go:89] found id: ""
	I1115 11:40:36.723119  735859 logs.go:282] 0 containers: []
	W1115 11:40:36.723129  735859 logs.go:284] No container was found matching "storage-provisioner"
	I1115 11:40:36.723137  735859 logs.go:123] Gathering logs for kube-apiserver [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd] ...
	I1115 11:40:36.723149  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:36.794752  735859 logs.go:123] Gathering logs for kube-scheduler [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f] ...
	I1115 11:40:36.794787  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:32.610081  750044 addons.go:515] duration metric: took 17.99325ms for enable addons: enabled=[]
	I1115 11:40:32.610255  750044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:40:32.898825  750044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:40:32.932431  750044 node_ready.go:35] waiting up to 6m0s for node "pause-137857" to be "Ready" ...
	I1115 11:40:36.891493  735859 logs.go:123] Gathering logs for kube-controller-manager [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138] ...
	I1115 11:40:36.891572  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:36.930771  735859 logs.go:123] Gathering logs for CRI-O ...
	I1115 11:40:36.930848  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 11:40:37.016083  735859 logs.go:123] Gathering logs for container status ...
	I1115 11:40:37.018564  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 11:40:37.099675  735859 logs.go:123] Gathering logs for kubelet ...
	I1115 11:40:37.099755  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 11:40:37.245232  735859 logs.go:123] Gathering logs for dmesg ...
	I1115 11:40:37.245265  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 11:40:37.273257  735859 logs.go:123] Gathering logs for describe nodes ...
	I1115 11:40:37.273282  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 11:40:37.396754  735859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 11:40:39.896986  735859 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:40:37.791981  750044 node_ready.go:49] node "pause-137857" is "Ready"
	I1115 11:40:37.792015  750044 node_ready.go:38] duration metric: took 4.859544167s for node "pause-137857" to be "Ready" ...
	I1115 11:40:37.792031  750044 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:40:37.792094  750044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:40:37.810784  750044 api_server.go:72] duration metric: took 5.219096392s to wait for apiserver process to appear ...
	I1115 11:40:37.810811  750044 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:40:37.810831  750044 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 11:40:37.855328  750044 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 11:40:37.855358  750044 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 11:40:38.310892  750044 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 11:40:38.321182  750044 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:40:38.321209  750044 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:40:38.811366  750044 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 11:40:38.819661  750044 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:40:38.819693  750044 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:40:39.310946  750044 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 11:40:39.320228  750044 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 11:40:39.321646  750044 api_server.go:141] control plane version: v1.34.1
	I1115 11:40:39.321687  750044 api_server.go:131] duration metric: took 1.510867615s to wait for apiserver health ...
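The healthz wait above shows the typical restart progression: 403 while the unauthenticated probe is rejected, 500 while poststarthooks such as rbac/bootstrap-roles are still pending, then 200 "ok". A minimal sketch of such a poll loop with `net/http`; certificate verification is skipped here only to keep the sketch self-contained, whereas a real client would trust the cluster CA, and the 500ms interval simply matches the cadence of the logged checks.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls the apiserver's /healthz until it returns 200 "ok" or the
// deadline passes, echoing the 403 -> 500 -> 200 progression logged above.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch short; do not copy into real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz unreachable:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthy("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}
```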
	I1115 11:40:39.321701  750044 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:40:39.325639  750044 system_pods.go:59] 7 kube-system pods found
	I1115 11:40:39.325685  750044 system_pods.go:61] "coredns-66bc5c9577-frrt2" [1267fcdc-111d-4540-bc10-4db6499c760a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:40:39.325700  750044 system_pods.go:61] "etcd-pause-137857" [7ed09d18-cbdf-4bd4-92f6-a794c81510a3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:40:39.325706  750044 system_pods.go:61] "kindnet-gtpl9" [a93dc784-4bb8-4091-b97d-54dbd2773c1a] Running
	I1115 11:40:39.325714  750044 system_pods.go:61] "kube-apiserver-pause-137857" [f0fd7683-99e7-475a-a2c8-f0ac268f10a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:40:39.325724  750044 system_pods.go:61] "kube-controller-manager-pause-137857" [93aaba4f-8401-43b8-b65c-9bee4d6b801f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:40:39.325732  750044 system_pods.go:61] "kube-proxy-pfg9h" [669bdfff-ffd7-414a-8459-f937c2fa2162] Running
	I1115 11:40:39.325750  750044 system_pods.go:61] "kube-scheduler-pause-137857" [b2cdbf76-ec95-436f-990d-1434ac98d7be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:40:39.325762  750044 system_pods.go:74] duration metric: took 4.053554ms to wait for pod list to return data ...
	I1115 11:40:39.325771  750044 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:40:39.328322  750044 default_sa.go:45] found service account: "default"
	I1115 11:40:39.328385  750044 default_sa.go:55] duration metric: took 2.592538ms for default service account to be created ...
	I1115 11:40:39.328408  750044 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:40:39.331675  750044 system_pods.go:86] 7 kube-system pods found
	I1115 11:40:39.331742  750044 system_pods.go:89] "coredns-66bc5c9577-frrt2" [1267fcdc-111d-4540-bc10-4db6499c760a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:40:39.331778  750044 system_pods.go:89] "etcd-pause-137857" [7ed09d18-cbdf-4bd4-92f6-a794c81510a3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:40:39.331798  750044 system_pods.go:89] "kindnet-gtpl9" [a93dc784-4bb8-4091-b97d-54dbd2773c1a] Running
	I1115 11:40:39.331826  750044 system_pods.go:89] "kube-apiserver-pause-137857" [f0fd7683-99e7-475a-a2c8-f0ac268f10a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:40:39.331857  750044 system_pods.go:89] "kube-controller-manager-pause-137857" [93aaba4f-8401-43b8-b65c-9bee4d6b801f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:40:39.331877  750044 system_pods.go:89] "kube-proxy-pfg9h" [669bdfff-ffd7-414a-8459-f937c2fa2162] Running
	I1115 11:40:39.331898  750044 system_pods.go:89] "kube-scheduler-pause-137857" [b2cdbf76-ec95-436f-990d-1434ac98d7be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:40:39.331935  750044 system_pods.go:126] duration metric: took 3.507761ms to wait for k8s-apps to be running ...
	I1115 11:40:39.331957  750044 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:40:39.332035  750044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:40:39.348716  750044 system_svc.go:56] duration metric: took 16.747218ms WaitForService to wait for kubelet
	I1115 11:40:39.348791  750044 kubeadm.go:587] duration metric: took 6.75710933s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:40:39.348826  750044 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:40:39.351657  750044 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:40:39.351740  750044 node_conditions.go:123] node cpu capacity is 2
	I1115 11:40:39.351777  750044 node_conditions.go:105] duration metric: took 2.928319ms to run NodePressure ...
	I1115 11:40:39.351816  750044 start.go:242] waiting for startup goroutines ...
	I1115 11:40:39.351838  750044 start.go:247] waiting for cluster config update ...
	I1115 11:40:39.351866  750044 start.go:256] writing updated cluster config ...
	I1115 11:40:39.352248  750044 ssh_runner.go:195] Run: rm -f paused
	I1115 11:40:39.360249  750044 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:40:39.361072  750044 kapi.go:59] client config for pause-137857: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 11:40:39.364487  750044 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-frrt2" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:40:41.370289  750044 pod_ready.go:104] pod "coredns-66bc5c9577-frrt2" is not "Ready", error: <nil>
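The pod_ready.go lines wait, per pod, until the Ready condition is true or the pod disappears. A sketch of that wait using client-go's polling helper; the namespace, pod name, kubeconfig path, and 4m timeout are taken from the log, while the helper itself is an illustrative reconstruction rather than minikube's own implementation.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReadyOrGone blocks until the named pod reports Ready=True or no
// longer exists, the condition the pod_ready.go lines above wait on.
func waitPodReadyOrGone(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // pod is gone, which also satisfies the wait
			}
			if err != nil {
				return false, nil // transient apiserver error: keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Kubeconfig path as written by the run above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21894-584713/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReadyOrGone(cs, "kube-system", "coredns-66bc5c9577-frrt2", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready (or gone)")
}
```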
	I1115 11:40:44.897322  735859 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1115 11:40:44.897396  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 11:40:44.897492  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 11:40:44.923250  735859 cri.go:89] found id: "1c83146f160646dec3ff9e163a428a9fa09ea338505f0dd5e3d2ced2d8113b55"
	I1115 11:40:44.923274  735859 cri.go:89] found id: "ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:44.923279  735859 cri.go:89] found id: ""
	I1115 11:40:44.923286  735859 logs.go:282] 2 containers: [1c83146f160646dec3ff9e163a428a9fa09ea338505f0dd5e3d2ced2d8113b55 ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd]
	I1115 11:40:44.923345  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:44.927136  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:44.930718  735859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 11:40:44.930791  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 11:40:44.956772  735859 cri.go:89] found id: ""
	I1115 11:40:44.956797  735859 logs.go:282] 0 containers: []
	W1115 11:40:44.956806  735859 logs.go:284] No container was found matching "etcd"
	I1115 11:40:44.956812  735859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 11:40:44.956902  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 11:40:44.983426  735859 cri.go:89] found id: ""
	I1115 11:40:44.983452  735859 logs.go:282] 0 containers: []
	W1115 11:40:44.983460  735859 logs.go:284] No container was found matching "coredns"
	I1115 11:40:44.983467  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 11:40:44.983536  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 11:40:45.067007  735859 cri.go:89] found id: "aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:45.067038  735859 cri.go:89] found id: ""
	I1115 11:40:45.067049  735859 logs.go:282] 1 containers: [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f]
	I1115 11:40:45.067117  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:45.076555  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 11:40:45.076643  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 11:40:45.169041  735859 cri.go:89] found id: ""
	I1115 11:40:45.169067  735859 logs.go:282] 0 containers: []
	W1115 11:40:45.169076  735859 logs.go:284] No container was found matching "kube-proxy"
	I1115 11:40:45.169083  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 11:40:45.169151  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 11:40:45.255539  735859 cri.go:89] found id: "ceaea6bfe285cb036171f548f24930a62493a452213c9dce8a316086c7fb819b"
	I1115 11:40:45.255641  735859 cri.go:89] found id: "e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:45.255682  735859 cri.go:89] found id: ""
	I1115 11:40:45.255725  735859 logs.go:282] 2 containers: [ceaea6bfe285cb036171f548f24930a62493a452213c9dce8a316086c7fb819b e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138]
	I1115 11:40:45.255890  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:45.267844  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:45.280083  735859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 11:40:45.280177  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 11:40:45.319965  735859 cri.go:89] found id: ""
	I1115 11:40:45.320003  735859 logs.go:282] 0 containers: []
	W1115 11:40:45.320013  735859 logs.go:284] No container was found matching "kindnet"
	I1115 11:40:45.320020  735859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 11:40:45.320150  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 11:40:45.348652  735859 cri.go:89] found id: ""
	I1115 11:40:45.348678  735859 logs.go:282] 0 containers: []
	W1115 11:40:45.348687  735859 logs.go:284] No container was found matching "storage-provisioner"
	I1115 11:40:45.348702  735859 logs.go:123] Gathering logs for kube-apiserver [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd] ...
	I1115 11:40:45.348714  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:45.380190  735859 logs.go:123] Gathering logs for container status ...
	I1115 11:40:45.380346  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 11:40:45.414174  735859 logs.go:123] Gathering logs for kubelet ...
	I1115 11:40:45.414201  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 11:40:45.529962  735859 logs.go:123] Gathering logs for dmesg ...
	I1115 11:40:45.529998  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 11:40:45.547467  735859 logs.go:123] Gathering logs for describe nodes ...
	I1115 11:40:45.547497  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 11:40:43.377411  750044 pod_ready.go:104] pod "coredns-66bc5c9577-frrt2" is not "Ready", error: <nil>
	I1115 11:40:44.870720  750044 pod_ready.go:94] pod "coredns-66bc5c9577-frrt2" is "Ready"
	I1115 11:40:44.870752  750044 pod_ready.go:86] duration metric: took 5.506192218s for pod "coredns-66bc5c9577-frrt2" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:44.873496  750044 pod_ready.go:83] waiting for pod "etcd-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:40:46.879307  750044 pod_ready.go:104] pod "etcd-pause-137857" is not "Ready", error: <nil>
	W1115 11:40:49.379719  750044 pod_ready.go:104] pod "etcd-pause-137857" is not "Ready", error: <nil>
	I1115 11:40:50.379499  750044 pod_ready.go:94] pod "etcd-pause-137857" is "Ready"
	I1115 11:40:50.379529  750044 pod_ready.go:86] duration metric: took 5.506007332s for pod "etcd-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:50.382203  750044 pod_ready.go:83] waiting for pod "kube-apiserver-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:50.386541  750044 pod_ready.go:94] pod "kube-apiserver-pause-137857" is "Ready"
	I1115 11:40:50.386614  750044 pod_ready.go:86] duration metric: took 4.384765ms for pod "kube-apiserver-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:50.388987  750044 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:50.393983  750044 pod_ready.go:94] pod "kube-controller-manager-pause-137857" is "Ready"
	I1115 11:40:50.394007  750044 pod_ready.go:86] duration metric: took 4.99332ms for pod "kube-controller-manager-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:50.396349  750044 pod_ready.go:83] waiting for pod "kube-proxy-pfg9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:50.577738  750044 pod_ready.go:94] pod "kube-proxy-pfg9h" is "Ready"
	I1115 11:40:50.577765  750044 pod_ready.go:86] duration metric: took 181.391139ms for pod "kube-proxy-pfg9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:50.778070  750044 pod_ready.go:83] waiting for pod "kube-scheduler-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:51.177037  750044 pod_ready.go:94] pod "kube-scheduler-pause-137857" is "Ready"
	I1115 11:40:51.177061  750044 pod_ready.go:86] duration metric: took 398.964178ms for pod "kube-scheduler-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:51.177074  750044 pod_ready.go:40] duration metric: took 11.816748848s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:40:51.238613  750044 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:40:51.241826  750044 out.go:179] * Done! kubectl is now configured to use "pause-137857" cluster and "default" namespace by default
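	
	For reference, the Run: lines above show every command minikube used to gather the sections that follow. A minimal sketch for replaying the same collection by hand, assuming the pause-137857 profile from this run is still up and kubectl is pointed at it (per the "Done!" line above); paths and flags are simplified from the originals:
	  # Sketch only: replay minikube's log gathering against the pause-137857 node.
	  minikube -p pause-137857 ssh -- sudo crictl ps -a                  # container status section below
	  minikube -p pause-137857 ssh -- sudo journalctl -u kubelet -n 400  # kubelet logs
	  minikube -p pause-137857 ssh -- "sudo dmesg | tail -n 400"         # dmesg section below
	  kubectl describe nodes                                             # describe nodes section below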
	
	
	==> CRI-O <==
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.66914554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.762680663Z" level=info msg="Created container 0edaa841d5b54f4378cd6d83469319e8ac4f8aac30757c315abf3dbec49fc8d1: kube-system/kube-apiserver-pause-137857/kube-apiserver" id=800b206c-efe4-4cdb-8668-4d15be1dd626 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.763479504Z" level=info msg="Starting container: 0edaa841d5b54f4378cd6d83469319e8ac4f8aac30757c315abf3dbec49fc8d1" id=f3abd657-9704-41c5-b8a2-b35df5575e55 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.766558076Z" level=info msg="Started container" PID=2388 containerID=0edaa841d5b54f4378cd6d83469319e8ac4f8aac30757c315abf3dbec49fc8d1 description=kube-system/kube-apiserver-pause-137857/kube-apiserver id=f3abd657-9704-41c5-b8a2-b35df5575e55 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9fa07442b56e274d46a78c78d1acd8da703f63d4d81901cede0ad975f94ce77
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.78821757Z" level=info msg="Created container 058db932812e69f341abcac250349bd6e8c187dafd11cc56fcda36d8609e59e1: kube-system/etcd-pause-137857/etcd" id=32eb41a0-e90e-4e7f-b135-43e3eb32e0cb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.789641778Z" level=info msg="Starting container: 058db932812e69f341abcac250349bd6e8c187dafd11cc56fcda36d8609e59e1" id=e3715dc6-011b-4368-881f-0073e90e0b4a name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.789769336Z" level=info msg="Created container a2e23ebc9fd1b0ca7799e0345fcc1c875b47bc77bd022f34805c8090a4fe0f0e: kube-system/kube-controller-manager-pause-137857/kube-controller-manager" id=2a825e93-326c-436e-a4ab-52869861567f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.790631973Z" level=info msg="Starting container: a2e23ebc9fd1b0ca7799e0345fcc1c875b47bc77bd022f34805c8090a4fe0f0e" id=ce09ac2c-adce-4521-a4f2-edd09b6cac48 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.791880475Z" level=info msg="Started container" PID=2375 containerID=058db932812e69f341abcac250349bd6e8c187dafd11cc56fcda36d8609e59e1 description=kube-system/etcd-pause-137857/etcd id=e3715dc6-011b-4368-881f-0073e90e0b4a name=/runtime.v1.RuntimeService/StartContainer sandboxID=943f4c184f8611ec94708c705265ddeb21a6d4ca00808c7dc65092c1cd983a99
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.796026009Z" level=info msg="Started container" PID=2382 containerID=a2e23ebc9fd1b0ca7799e0345fcc1c875b47bc77bd022f34805c8090a4fe0f0e description=kube-system/kube-controller-manager-pause-137857/kube-controller-manager id=ce09ac2c-adce-4521-a4f2-edd09b6cac48 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6fafba999cfae38323e1f391d1d26ff8cce13fbc06e61ab15e62e7589519eb7a
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.925586945Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.930514065Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.930552728Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.930575366Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.93406542Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.934107037Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.934128272Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.939649284Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.939688767Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.939710503Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.944609594Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.944648347Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.944671658Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.951349947Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.951389299Z" level=info msg="Updated default CNI network name to kindnet"
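	
	The CNI monitoring events above all point at /etc/cni/net.d/10-kindnet.conflist on the node. A minimal sketch for inspecting the config CRI-O reports loading, assuming the pause-137857 profile is still running:
	  # Sketch only: show the kindnet CNI config that CRI-O picked up.
	  minikube -p pause-137857 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist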
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0edaa841d5b54       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago       Running             kube-apiserver            1                   b9fa07442b56e       kube-apiserver-pause-137857            kube-system
	058db932812e6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago       Running             etcd                      1                   943f4c184f861       etcd-pause-137857                      kube-system
	a2e23ebc9fd1b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago       Running             kube-controller-manager   1                   6fafba999cfae       kube-controller-manager-pause-137857   kube-system
	14380df9df23f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago       Running             kube-scheduler            1                   7f68f02aedddb       kube-scheduler-pause-137857            kube-system
	86ddf29301be3       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   f4b2aead6dd93       coredns-66bc5c9577-frrt2               kube-system
	11fc878711b4b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   22 seconds ago       Running             kube-proxy                1                   2de5d237b7269       kube-proxy-pfg9h                       kube-system
	753a589caf043       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   d50493133689a       kindnet-gtpl9                          kube-system
	dd0616de4773a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   f4b2aead6dd93       coredns-66bc5c9577-frrt2               kube-system
	f987d39fb8e95       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   2de5d237b7269       kube-proxy-pfg9h                       kube-system
	fdd5538b2f7f7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   d50493133689a       kindnet-gtpl9                          kube-system
	6acd4d6f33ed4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   7f68f02aedddb       kube-scheduler-pause-137857            kube-system
	94bac5dfed4e1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   6fafba999cfae       kube-controller-manager-pause-137857   kube-system
	2d26f0dee211f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   b9fa07442b56e       kube-apiserver-pause-137857            kube-system
	de91188330d1a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   943f4c184f861       etcd-pause-137857                      kube-system
	
	
	==> coredns [86ddf29301be37859289c1c5f546685bc84187eeffd2b7f42158ec98d7a8b59f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59674 - 15543 "HINFO IN 6713335303970030121.2701297401939487917. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01368394s
	
	
	==> coredns [dd0616de4773a796a23dd40e00be0c3d01316f7f2591993fd06c018d4a4aa991] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50912 - 14507 "HINFO IN 4710523890366550557.2551221312603868006. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026977444s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
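	
	The two CoreDNS logs above show the expected restart pattern: the old instance receives SIGTERM and enters lameduck mode, while the new one retries 10.96.0.1:443 until the API server is back and then serves on :53. A minimal sketch for confirming recovery, assuming kubectl is pointed at this cluster and using the k8s-app=kube-dns label from the readiness wait earlier in the log:
	  # Sketch only: check that coredns-66bc5c9577-frrt2 is Ready and serving again.
	  kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20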
	
	
	==> describe nodes <==
	Name:               pause-137857
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-137857
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=pause-137857
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_39_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:39:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-137857
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:40:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:40:19 +0000   Sat, 15 Nov 2025 11:39:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:40:19 +0000   Sat, 15 Nov 2025 11:39:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:40:19 +0000   Sat, 15 Nov 2025 11:39:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:40:19 +0000   Sat, 15 Nov 2025 11:40:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-137857
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                5b75f958-fccb-41b6-88bf-5a1b0ef1e957
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-frrt2                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-137857                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kindnet-gtpl9                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-137857             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-137857    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-pfg9h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-137857             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 74s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeHasSufficientPID     90s (x8 over 90s)  kubelet          Node pause-137857 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 90s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  90s (x8 over 90s)  kubelet          Node pause-137857 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    90s (x8 over 90s)  kubelet          Node pause-137857 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 90s                kubelet          Starting kubelet.
	  Normal   Starting                 81s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 81s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s                kubelet          Node pause-137857 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s                kubelet          Node pause-137857 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s                kubelet          Node pause-137857 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                node-controller  Node pause-137857 event: Registered Node pause-137857 in Controller
	  Normal   NodeReady                35s                kubelet          Node pause-137857 status is now: NodeReady
	  Normal   RegisteredNode           13s                node-controller  Node pause-137857 event: Registered Node pause-137857 in Controller
	
	
	==> dmesg <==
	[Nov15 11:08] overlayfs: idmapped layers are currently not supported
	[Nov15 11:09] overlayfs: idmapped layers are currently not supported
	[Nov15 11:10] overlayfs: idmapped layers are currently not supported
	[  +3.526164] overlayfs: idmapped layers are currently not supported
	[Nov15 11:12] overlayfs: idmapped layers are currently not supported
	[Nov15 11:16] overlayfs: idmapped layers are currently not supported
	[Nov15 11:18] overlayfs: idmapped layers are currently not supported
	[Nov15 11:22] overlayfs: idmapped layers are currently not supported
	[Nov15 11:23] overlayfs: idmapped layers are currently not supported
	[Nov15 11:24] overlayfs: idmapped layers are currently not supported
	[Nov15 11:25] overlayfs: idmapped layers are currently not supported
	[Nov15 11:26] overlayfs: idmapped layers are currently not supported
	[Nov15 11:28] overlayfs: idmapped layers are currently not supported
	[ +23.116625] overlayfs: idmapped layers are currently not supported
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [058db932812e69f341abcac250349bd6e8c187dafd11cc56fcda36d8609e59e1] <==
	{"level":"warn","ts":"2025-11-15T11:40:35.752701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.772891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.800886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.836893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.845603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.857437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.879506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.898353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.911393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.933345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.957019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.966637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.990010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.010938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.023164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.047386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.063581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.077726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.141439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.163760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.175397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.208430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.241318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.256978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.309085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50924","server-name":"","error":"EOF"}
	
	
	==> etcd [de91188330d1a20583f7966a076883bee5455862d604125627d4c3041168253d] <==
	{"level":"warn","ts":"2025-11-15T11:39:28.412391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:39:28.462892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:39:28.484286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:39:28.517834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:39:28.549969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:39:28.720221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32904","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T11:39:38.450153Z","caller":"traceutil/trace.go:172","msg":"trace[1373993601] transaction","detail":"{read_only:false; response_revision:353; number_of_response:1; }","duration":"116.714311ms","start":"2025-11-15T11:39:38.333423Z","end":"2025-11-15T11:39:38.450137Z","steps":["trace[1373993601] 'process raft request'  (duration: 84.375694ms)","trace[1373993601] 'compare'  (duration: 31.858121ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T11:40:24.163402Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-15T11:40:24.163468Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-137857","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-15T11:40:24.163569Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T11:40:24.306963Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T11:40:24.307063Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T11:40:24.307087Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-15T11:40:24.307173Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-15T11:40:24.307200Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-15T11:40:24.307257Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T11:40:24.307326Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T11:40:24.307361Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-15T11:40:24.307428Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T11:40:24.307445Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T11:40:24.307453Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T11:40:24.310569Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-15T11:40:24.310648Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T11:40:24.310716Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T11:40:24.310741Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-137857","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
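	
	The etcd log directly above is the pre-restart instance shutting down cleanly ("skipped leadership transfer for single voting member cluster"); the [058db...] log before it is its replacement rejecting client probes while it comes up. A minimal sketch for waiting on the restarted static pod, using the pod name from the readiness loop earlier in this run and assuming kubectl access to the cluster:
	  # Sketch only: block until the restarted etcd pod reports Ready.
	  kubectl -n kube-system wait --for=condition=Ready pod/etcd-pause-137857 --timeout=120s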
	
	
	==> kernel <==
	 11:40:54 up  3:23,  0 user,  load average: 2.17, 3.05, 2.46
	Linux pause-137857 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [753a589caf043fd7414736e947ca13435428a97c154d59c7685ee4e40b4cb298] <==
	I1115 11:40:32.619191       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:40:32.624283       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 11:40:32.624436       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:40:32.624449       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:40:32.624461       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:40:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1115 11:40:32.928322       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1115 11:40:32.928729       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:40:32.928740       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:40:32.928749       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:40:32.929074       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 11:40:32.929198       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 11:40:32.929278       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 11:40:32.929602       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 11:40:38.030478       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:40:38.030603       1 metrics.go:72] Registering metrics
	I1115 11:40:38.030695       1 controller.go:711] "Syncing nftables rules"
	I1115 11:40:42.925208       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:40:42.925257       1 main.go:301] handling current node
	I1115 11:40:52.927003       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:40:52.927049       1 main.go:301] handling current node
	
	
	==> kindnet [fdd5538b2f7f7e62b59f154b2a363d4687c38db318c41a946f26501d7164d4dd] <==
	I1115 11:39:39.299419       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:39:39.299851       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 11:39:39.300014       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:39:39.300056       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:39:39.300098       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:39:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:39:39.498309       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:39:39.498337       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:39:39.498347       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:39:39.498451       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 11:40:09.499243       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 11:40:09.499249       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 11:40:09.499355       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 11:40:09.499500       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1115 11:40:10.898526       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:40:10.898577       1 metrics.go:72] Registering metrics
	I1115 11:40:10.898654       1 controller.go:711] "Syncing nftables rules"
	I1115 11:40:19.505118       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:40:19.505181       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0edaa841d5b54f4378cd6d83469319e8ac4f8aac30757c315abf3dbec49fc8d1] <==
	I1115 11:40:37.886800       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 11:40:37.899800       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 11:40:37.905544       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 11:40:37.905638       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 11:40:37.905840       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 11:40:37.905907       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 11:40:37.911114       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 11:40:37.911145       1 policy_source.go:240] refreshing policies
	I1115 11:40:37.920686       1 aggregator.go:171] initial CRD sync complete...
	I1115 11:40:37.920763       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 11:40:37.920794       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 11:40:37.920823       1 cache.go:39] Caches are synced for autoregister controller
	I1115 11:40:37.942140       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 11:40:37.942381       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 11:40:37.942437       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 11:40:37.947764       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 11:40:37.947989       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1115 11:40:37.960417       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 11:40:37.970419       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:40:38.582326       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:40:39.665633       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 11:40:41.066685       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 11:40:41.264346       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 11:40:41.313278       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 11:40:41.465890       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [2d26f0dee211fd9e4cf2cd430bd4cd091ee46599dc64ce64ac55aef62ac2077f] <==
	W1115 11:40:24.179668       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.179710       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.179754       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.179797       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.179879       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.181888       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.181967       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.182134       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.182188       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.182693       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.182772       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.182832       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.182886       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183121       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183161       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183205       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183239       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183272       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183827       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183880       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183934       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.184219       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.184270       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.184327       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.184366       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [94bac5dfed4e1eb49a8b8809a81cb583d530dd957d56e7afb6dae60ae4e02b66] <==
	I1115 11:39:37.514043       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 11:39:37.522038       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:39:37.511805       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 11:39:37.528428       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 11:39:37.528615       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-137857"
	I1115 11:39:37.528717       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 11:39:37.528787       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 11:39:37.528831       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 11:39:37.511816       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 11:39:37.535234       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 11:39:37.542645       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 11:39:37.535381       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:39:37.535495       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 11:39:37.535935       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-137857" podCIDRs=["10.244.0.0/24"]
	I1115 11:39:37.535484       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 11:39:37.549071       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 11:39:37.560252       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:39:37.560275       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:39:37.560285       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:39:37.565370       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 11:39:37.565685       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 11:39:37.565836       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 11:39:37.567136       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:39:37.573731       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 11:40:22.535056       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [a2e23ebc9fd1b0ca7799e0345fcc1c875b47bc77bd022f34805c8090a4fe0f0e] <==
	I1115 11:40:41.066874       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 11:40:41.066960       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 11:40:41.069683       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 11:40:41.072703       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:40:41.073052       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 11:40:41.076937       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 11:40:41.080161       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 11:40:41.086534       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 11:40:41.086613       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 11:40:41.086660       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 11:40:41.086677       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 11:40:41.086684       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 11:40:41.089870       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 11:40:41.093425       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 11:40:41.106464       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 11:40:41.106464       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 11:40:41.106589       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 11:40:41.107581       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 11:40:41.107637       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 11:40:41.107669       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 11:40:41.111023       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 11:40:41.112469       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:40:41.134791       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:40:41.134819       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:40:41.134836       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [11fc878711b4b05161fecbabbccacaac0a3ea8614883fb13f4fdb0e5aa15a538] <==
	I1115 11:40:32.636973       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:40:33.800887       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:40:37.954365       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:40:37.974163       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 11:40:37.974332       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:40:38.215526       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:40:38.215592       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:40:38.288962       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:40:38.294296       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:40:38.344894       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:40:38.406305       1 config.go:200] "Starting service config controller"
	I1115 11:40:38.407632       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:40:38.407767       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:40:38.407798       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:40:38.407834       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:40:38.407862       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:40:38.408532       1 config.go:309] "Starting node config controller"
	I1115 11:40:38.415029       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:40:38.415127       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:40:38.507885       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 11:40:38.508843       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:40:38.513916       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [f987d39fb8e9536febaac7a736e61e364b97c1cde64982f9af503c04295401e2] <==
	I1115 11:39:40.389442       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:39:40.482440       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:39:40.582892       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:39:40.583026       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 11:39:40.583161       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:39:40.602940       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:39:40.603060       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:39:40.607333       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:39:40.607708       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:39:40.608115       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:39:40.611423       1 config.go:200] "Starting service config controller"
	I1115 11:39:40.611501       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:39:40.611539       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:39:40.611577       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:39:40.611609       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:39:40.611634       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:39:40.612598       1 config.go:309] "Starting node config controller"
	I1115 11:39:40.612690       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:39:40.612723       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:39:40.712383       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:39:40.712480       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 11:39:40.712499       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [14380df9df23f9d41205f28106bd8a47807ea891d5c0d8a8f437a06ab753b04c] <==
	I1115 11:40:35.004363       1 serving.go:386] Generated self-signed cert in-memory
	I1115 11:40:38.585328       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 11:40:38.585429       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:40:38.592434       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 11:40:38.592521       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 11:40:38.592561       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 11:40:38.592590       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 11:40:38.609611       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:40:38.609643       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:40:38.609664       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:40:38.609670       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:40:38.692679       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 11:40:38.710138       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:40:38.710265       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [6acd4d6f33ed454af357db0198a45dfe3418d3e9027f6741e2204f23bbd28f6a] <==
	E1115 11:39:30.889272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 11:39:30.897954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 11:39:30.898121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 11:39:30.898237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 11:39:30.898278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 11:39:30.898362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:39:30.898404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 11:39:30.898457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 11:39:30.898519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 11:39:30.898538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 11:39:30.898632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 11:39:30.898675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 11:39:30.898742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 11:39:30.898749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 11:39:30.898789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 11:39:30.898898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 11:39:30.898887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:39:30.898950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1115 11:39:32.080460       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:40:24.169343       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1115 11:40:24.169374       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1115 11:40:24.169398       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1115 11:40:24.169426       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:40:24.169635       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1115 11:40:24.169652       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.533841    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfg9h\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="669bdfff-ffd7-414a-8459-f937c2fa2162" pod="kube-system/kube-proxy-pfg9h"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.533977    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-frrt2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1267fcdc-111d-4540-bc10-4db6499c760a" pod="kube-system/coredns-66bc5c9577-frrt2"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.534111    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6ce87924d4e6aec5abfbf3b1f82d6cde" pod="kube-system/etcd-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: I1115 11:40:32.596832    1307 scope.go:117] "RemoveContainer" containerID="de91188330d1a20583f7966a076883bee5455862d604125627d4c3041168253d"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.597820    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6ce87924d4e6aec5abfbf3b1f82d6cde" pod="kube-system/etcd-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.598020    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="97ccbb7cf4e8e6e0045f2479434e619b" pod="kube-system/kube-apiserver-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.598178    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a843aecd7cdae402f31837f9ba53da77" pod="kube-system/kube-controller-manager-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.598314    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d197660a217ec3c231e642bc19a69329" pod="kube-system/kube-scheduler-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.598452    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gtpl9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a93dc784-4bb8-4091-b97d-54dbd2773c1a" pod="kube-system/kindnet-gtpl9"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.598585    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfg9h\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="669bdfff-ffd7-414a-8459-f937c2fa2162" pod="kube-system/kube-proxy-pfg9h"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.598718    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-frrt2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1267fcdc-111d-4540-bc10-4db6499c760a" pod="kube-system/coredns-66bc5c9577-frrt2"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: I1115 11:40:32.601722    1307 scope.go:117] "RemoveContainer" containerID="2d26f0dee211fd9e4cf2cd430bd4cd091ee46599dc64ce64ac55aef62ac2077f"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.602332    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-frrt2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1267fcdc-111d-4540-bc10-4db6499c760a" pod="kube-system/coredns-66bc5c9577-frrt2"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.602612    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6ce87924d4e6aec5abfbf3b1f82d6cde" pod="kube-system/etcd-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.602867    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="97ccbb7cf4e8e6e0045f2479434e619b" pod="kube-system/kube-apiserver-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.603122    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a843aecd7cdae402f31837f9ba53da77" pod="kube-system/kube-controller-manager-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.603367    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d197660a217ec3c231e642bc19a69329" pod="kube-system/kube-scheduler-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.603606    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gtpl9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a93dc784-4bb8-4091-b97d-54dbd2773c1a" pod="kube-system/kindnet-gtpl9"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.603849    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfg9h\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="669bdfff-ffd7-414a-8459-f937c2fa2162" pod="kube-system/kube-proxy-pfg9h"
	Nov 15 11:40:37 pause-137857 kubelet[1307]: E1115 11:40:37.649912    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-pfg9h\" is forbidden: User \"system:node:pause-137857\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-137857' and this object" podUID="669bdfff-ffd7-414a-8459-f937c2fa2162" pod="kube-system/kube-proxy-pfg9h"
	Nov 15 11:40:37 pause-137857 kubelet[1307]: E1115 11:40:37.650609    1307 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-137857\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-137857' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 15 11:40:43 pause-137857 kubelet[1307]: W1115 11:40:43.477802    1307 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 15 11:40:51 pause-137857 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 11:40:51 pause-137857 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 11:40:51 pause-137857 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-137857 -n pause-137857
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-137857 -n pause-137857: exit status 2 (455.225688ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-137857 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-137857
helpers_test.go:243: (dbg) docker inspect pause-137857:

-- stdout --
	[
	    {
	        "Id": "8674ed18a6724777a5d4d93bcc57e06d88527d455d45b24b2e28a09d10ca3e1b",
	        "Created": "2025-11-15T11:39:07.197467192Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 745885,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:39:07.266718564Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8674ed18a6724777a5d4d93bcc57e06d88527d455d45b24b2e28a09d10ca3e1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8674ed18a6724777a5d4d93bcc57e06d88527d455d45b24b2e28a09d10ca3e1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/8674ed18a6724777a5d4d93bcc57e06d88527d455d45b24b2e28a09d10ca3e1b/hosts",
	        "LogPath": "/var/lib/docker/containers/8674ed18a6724777a5d4d93bcc57e06d88527d455d45b24b2e28a09d10ca3e1b/8674ed18a6724777a5d4d93bcc57e06d88527d455d45b24b2e28a09d10ca3e1b-json.log",
	        "Name": "/pause-137857",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-137857:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-137857",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8674ed18a6724777a5d4d93bcc57e06d88527d455d45b24b2e28a09d10ca3e1b",
	                "LowerDir": "/var/lib/docker/overlay2/cff7736baab34166bf7b3a9ffae054047167784cbda37d15d9aabf387b7fca8a-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cff7736baab34166bf7b3a9ffae054047167784cbda37d15d9aabf387b7fca8a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cff7736baab34166bf7b3a9ffae054047167784cbda37d15d9aabf387b7fca8a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cff7736baab34166bf7b3a9ffae054047167784cbda37d15d9aabf387b7fca8a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-137857",
	                "Source": "/var/lib/docker/volumes/pause-137857/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-137857",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-137857",
	                "name.minikube.sigs.k8s.io": "pause-137857",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ce72cdb6131a0554e191e41e996d849511e993c0f38d63074495c459c416ac4",
	            "SandboxKey": "/var/run/docker/netns/1ce72cdb6131",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33764"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33765"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33766"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33767"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-137857": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:f1:b8:93:17:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b58a2a344df4b6dd1277b577b1a0f017e112da78547520a1bd00a5940fbcc581",
	                    "EndpointID": "ea51cb11043e0bb942bc71df458916a41eca371a450ff6ce4110329d859cab2c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-137857",
	                        "8674ed18a672"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-137857 -n pause-137857
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-137857 -n pause-137857: exit status 2 (357.546867ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-137857 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-137857 logs -n 25: (1.368154328s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-505051 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:35 UTC │ 15 Nov 25 11:35 UTC │
	│ start   │ -p missing-upgrade-028715 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-028715    │ jenkins │ v1.32.0 │ 15 Nov 25 11:35 UTC │ 15 Nov 25 11:36 UTC │
	│ start   │ -p NoKubernetes-505051 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:35 UTC │ 15 Nov 25 11:36 UTC │
	│ delete  │ -p NoKubernetes-505051                                                                                                                   │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │ 15 Nov 25 11:36 UTC │
	│ start   │ -p NoKubernetes-505051 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │ 15 Nov 25 11:36 UTC │
	│ start   │ -p missing-upgrade-028715 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-028715    │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │ 15 Nov 25 11:37 UTC │
	│ ssh     │ -p NoKubernetes-505051 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │                     │
	│ stop    │ -p NoKubernetes-505051                                                                                                                   │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │ 15 Nov 25 11:36 UTC │
	│ start   │ -p NoKubernetes-505051 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │ 15 Nov 25 11:36 UTC │
	│ ssh     │ -p NoKubernetes-505051 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │                     │
	│ delete  │ -p NoKubernetes-505051                                                                                                                   │ NoKubernetes-505051       │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │ 15 Nov 25 11:36 UTC │
	│ start   │ -p kubernetes-upgrade-436490 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-436490 │ jenkins │ v1.37.0 │ 15 Nov 25 11:36 UTC │ 15 Nov 25 11:37 UTC │
	│ delete  │ -p missing-upgrade-028715                                                                                                                │ missing-upgrade-028715    │ jenkins │ v1.37.0 │ 15 Nov 25 11:37 UTC │ 15 Nov 25 11:37 UTC │
	│ start   │ -p stopped-upgrade-484617 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-484617    │ jenkins │ v1.32.0 │ 15 Nov 25 11:37 UTC │ 15 Nov 25 11:37 UTC │
	│ stop    │ -p kubernetes-upgrade-436490                                                                                                             │ kubernetes-upgrade-436490 │ jenkins │ v1.37.0 │ 15 Nov 25 11:37 UTC │ 15 Nov 25 11:37 UTC │
	│ start   │ -p kubernetes-upgrade-436490 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-436490 │ jenkins │ v1.37.0 │ 15 Nov 25 11:37 UTC │                     │
	│ stop    │ stopped-upgrade-484617 stop                                                                                                              │ stopped-upgrade-484617    │ jenkins │ v1.32.0 │ 15 Nov 25 11:37 UTC │ 15 Nov 25 11:37 UTC │
	│ start   │ -p stopped-upgrade-484617 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-484617    │ jenkins │ v1.37.0 │ 15 Nov 25 11:37 UTC │ 15 Nov 25 11:38 UTC │
	│ delete  │ -p stopped-upgrade-484617                                                                                                                │ stopped-upgrade-484617    │ jenkins │ v1.37.0 │ 15 Nov 25 11:38 UTC │ 15 Nov 25 11:38 UTC │
	│ start   │ -p running-upgrade-165074 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-165074    │ jenkins │ v1.32.0 │ 15 Nov 25 11:38 UTC │ 15 Nov 25 11:38 UTC │
	│ start   │ -p running-upgrade-165074 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-165074    │ jenkins │ v1.37.0 │ 15 Nov 25 11:38 UTC │ 15 Nov 25 11:38 UTC │
	│ delete  │ -p running-upgrade-165074                                                                                                                │ running-upgrade-165074    │ jenkins │ v1.37.0 │ 15 Nov 25 11:38 UTC │ 15 Nov 25 11:39 UTC │
	│ start   │ -p pause-137857 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-137857              │ jenkins │ v1.37.0 │ 15 Nov 25 11:39 UTC │ 15 Nov 25 11:40 UTC │
	│ start   │ -p pause-137857 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-137857              │ jenkins │ v1.37.0 │ 15 Nov 25 11:40 UTC │ 15 Nov 25 11:40 UTC │
	│ pause   │ -p pause-137857 --alsologtostderr -v=5                                                                                                   │ pause-137857              │ jenkins │ v1.37.0 │ 15 Nov 25 11:40 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:40:22
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:40:22.343163  750044 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:40:22.343378  750044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:40:22.343405  750044 out.go:374] Setting ErrFile to fd 2...
	I1115 11:40:22.343424  750044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:40:22.343725  750044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:40:22.344116  750044 out.go:368] Setting JSON to false
	I1115 11:40:22.345153  750044 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12173,"bootTime":1763194649,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:40:22.345247  750044 start.go:143] virtualization:  
	I1115 11:40:22.348162  750044 out.go:179] * [pause-137857] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:40:22.351906  750044 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:40:22.351974  750044 notify.go:221] Checking for updates...
	I1115 11:40:22.357712  750044 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:40:22.360761  750044 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:40:22.363770  750044 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:40:22.366708  750044 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:40:22.369661  750044 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:40:22.373198  750044 config.go:182] Loaded profile config "pause-137857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:40:22.373765  750044 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:40:22.405387  750044 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:40:22.405503  750044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:40:22.465203  750044 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 11:40:22.453833463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:40:22.465330  750044 docker.go:319] overlay module found
	I1115 11:40:22.468560  750044 out.go:179] * Using the docker driver based on existing profile
	I1115 11:40:22.471593  750044 start.go:309] selected driver: docker
	I1115 11:40:22.471618  750044 start.go:930] validating driver "docker" against &{Name:pause-137857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-137857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:40:22.471761  750044 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:40:22.471884  750044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:40:22.545682  750044 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 11:40:22.535348224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:40:22.546115  750044 cni.go:84] Creating CNI manager for ""
	I1115 11:40:22.546178  750044 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:40:22.546226  750044 start.go:353] cluster config:
	{Name:pause-137857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-137857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:40:22.549561  750044 out.go:179] * Starting "pause-137857" primary control-plane node in "pause-137857" cluster
	I1115 11:40:22.552507  750044 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:40:22.555505  750044 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:40:22.558572  750044 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:40:22.558627  750044 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 11:40:22.558650  750044 cache.go:65] Caching tarball of preloaded images
	I1115 11:40:22.558662  750044 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:40:22.558734  750044 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:40:22.558744  750044 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:40:22.558882  750044 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/config.json ...
	I1115 11:40:22.579271  750044 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:40:22.579291  750044 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:40:22.579313  750044 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:40:22.579337  750044 start.go:360] acquireMachinesLock for pause-137857: {Name:mk9cd9983ffd468b7568b6b094e521a7bf0b03a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:40:22.579399  750044 start.go:364] duration metric: took 45.703µs to acquireMachinesLock for "pause-137857"
	I1115 11:40:22.579420  750044 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:40:22.579425  750044 fix.go:54] fixHost starting: 
	I1115 11:40:22.579693  750044 cli_runner.go:164] Run: docker container inspect pause-137857 --format={{.State.Status}}
	I1115 11:40:22.597789  750044 fix.go:112] recreateIfNeeded on pause-137857: state=Running err=<nil>
	W1115 11:40:22.597823  750044 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:40:22.924959  735859 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:40:22.925418  735859 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 11:40:22.925472  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 11:40:22.925533  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 11:40:22.971972  735859 cri.go:89] found id: "ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:22.971998  735859 cri.go:89] found id: ""
	I1115 11:40:22.972008  735859 logs.go:282] 1 containers: [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd]
	I1115 11:40:22.972064  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:22.977163  735859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 11:40:22.977232  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 11:40:23.023168  735859 cri.go:89] found id: ""
	I1115 11:40:23.023202  735859 logs.go:282] 0 containers: []
	W1115 11:40:23.023211  735859 logs.go:284] No container was found matching "etcd"
	I1115 11:40:23.023217  735859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 11:40:23.023293  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 11:40:23.066000  735859 cri.go:89] found id: ""
	I1115 11:40:23.066029  735859 logs.go:282] 0 containers: []
	W1115 11:40:23.066037  735859 logs.go:284] No container was found matching "coredns"
	I1115 11:40:23.066049  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 11:40:23.066120  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 11:40:23.105079  735859 cri.go:89] found id: "aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:23.105100  735859 cri.go:89] found id: ""
	I1115 11:40:23.105108  735859 logs.go:282] 1 containers: [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f]
	I1115 11:40:23.105170  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:23.110055  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 11:40:23.110126  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 11:40:23.149343  735859 cri.go:89] found id: ""
	I1115 11:40:23.149368  735859 logs.go:282] 0 containers: []
	W1115 11:40:23.149376  735859 logs.go:284] No container was found matching "kube-proxy"
	I1115 11:40:23.149382  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 11:40:23.149445  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 11:40:23.185508  735859 cri.go:89] found id: "e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:23.185533  735859 cri.go:89] found id: ""
	I1115 11:40:23.185542  735859 logs.go:282] 1 containers: [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138]
	I1115 11:40:23.185599  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:23.192501  735859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 11:40:23.192589  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 11:40:23.250483  735859 cri.go:89] found id: ""
	I1115 11:40:23.250509  735859 logs.go:282] 0 containers: []
	W1115 11:40:23.250517  735859 logs.go:284] No container was found matching "kindnet"
	I1115 11:40:23.250524  735859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 11:40:23.250581  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 11:40:23.281401  735859 cri.go:89] found id: ""
	I1115 11:40:23.281428  735859 logs.go:282] 0 containers: []
	W1115 11:40:23.281437  735859 logs.go:284] No container was found matching "storage-provisioner"
	I1115 11:40:23.281445  735859 logs.go:123] Gathering logs for kube-controller-manager [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138] ...
	I1115 11:40:23.281457  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:23.321061  735859 logs.go:123] Gathering logs for CRI-O ...
	I1115 11:40:23.321091  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 11:40:23.396329  735859 logs.go:123] Gathering logs for container status ...
	I1115 11:40:23.396450  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 11:40:23.429480  735859 logs.go:123] Gathering logs for kubelet ...
	I1115 11:40:23.429508  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 11:40:23.563912  735859 logs.go:123] Gathering logs for dmesg ...
	I1115 11:40:23.563989  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 11:40:23.582858  735859 logs.go:123] Gathering logs for describe nodes ...
	I1115 11:40:23.582884  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 11:40:23.655218  735859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 11:40:23.655237  735859 logs.go:123] Gathering logs for kube-apiserver [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd] ...
	I1115 11:40:23.655250  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:23.695215  735859 logs.go:123] Gathering logs for kube-scheduler [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f] ...
	I1115 11:40:23.695286  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
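[Editor's note] The "Checking apiserver healthz at https://192.168.76.2:8443/healthz ..." / "stopped: ... connection refused" pairs in this stream are repeated health probes against the apiserver. A minimal standalone sketch of that kind of probe is below (not minikube's actual code); the URL is taken from the log, and InsecureSkipVerify stands in for minikube's real CA handling, which is an assumption here.

// healthz_probe.go - hedged sketch of an apiserver /healthz probe like the ones logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption: skip cert verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		// A refused connection corresponds to the "stopped: ..." lines in the log.
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}
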
	I1115 11:40:26.272952  735859 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:40:26.273403  735859 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 11:40:26.273454  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 11:40:26.273516  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 11:40:26.299985  735859 cri.go:89] found id: "ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:26.300007  735859 cri.go:89] found id: ""
	I1115 11:40:26.300015  735859 logs.go:282] 1 containers: [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd]
	I1115 11:40:26.300074  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:26.303666  735859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 11:40:26.303738  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 11:40:26.331616  735859 cri.go:89] found id: ""
	I1115 11:40:26.331639  735859 logs.go:282] 0 containers: []
	W1115 11:40:26.331647  735859 logs.go:284] No container was found matching "etcd"
	I1115 11:40:26.331654  735859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 11:40:26.331714  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 11:40:26.357926  735859 cri.go:89] found id: ""
	I1115 11:40:26.357950  735859 logs.go:282] 0 containers: []
	W1115 11:40:26.357958  735859 logs.go:284] No container was found matching "coredns"
	I1115 11:40:26.357964  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 11:40:26.358021  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 11:40:26.384014  735859 cri.go:89] found id: "aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:26.384036  735859 cri.go:89] found id: ""
	I1115 11:40:26.384044  735859 logs.go:282] 1 containers: [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f]
	I1115 11:40:26.384109  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:26.387772  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 11:40:26.387868  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 11:40:26.413628  735859 cri.go:89] found id: ""
	I1115 11:40:26.413653  735859 logs.go:282] 0 containers: []
	W1115 11:40:26.413662  735859 logs.go:284] No container was found matching "kube-proxy"
	I1115 11:40:26.413668  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 11:40:26.413726  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 11:40:26.443614  735859 cri.go:89] found id: "e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:26.443677  735859 cri.go:89] found id: ""
	I1115 11:40:26.443698  735859 logs.go:282] 1 containers: [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138]
	I1115 11:40:26.443788  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:26.447658  735859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 11:40:26.447739  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 11:40:26.474955  735859 cri.go:89] found id: ""
	I1115 11:40:26.474980  735859 logs.go:282] 0 containers: []
	W1115 11:40:26.474989  735859 logs.go:284] No container was found matching "kindnet"
	I1115 11:40:26.474995  735859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 11:40:26.475055  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 11:40:26.504742  735859 cri.go:89] found id: ""
	I1115 11:40:26.504765  735859 logs.go:282] 0 containers: []
	W1115 11:40:26.504773  735859 logs.go:284] No container was found matching "storage-provisioner"
	I1115 11:40:26.504781  735859 logs.go:123] Gathering logs for dmesg ...
	I1115 11:40:26.504795  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 11:40:26.521914  735859 logs.go:123] Gathering logs for describe nodes ...
	I1115 11:40:26.521943  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 11:40:26.586030  735859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 11:40:26.586049  735859 logs.go:123] Gathering logs for kube-apiserver [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd] ...
	I1115 11:40:26.586063  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:26.621987  735859 logs.go:123] Gathering logs for kube-scheduler [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f] ...
	I1115 11:40:26.622018  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:26.680896  735859 logs.go:123] Gathering logs for kube-controller-manager [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138] ...
	I1115 11:40:26.680929  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:26.706797  735859 logs.go:123] Gathering logs for CRI-O ...
	I1115 11:40:26.706832  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 11:40:26.761844  735859 logs.go:123] Gathering logs for container status ...
	I1115 11:40:26.761881  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 11:40:26.796053  735859 logs.go:123] Gathering logs for kubelet ...
	I1115 11:40:26.796080  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 11:40:22.600973  750044 out.go:252] * Updating the running docker "pause-137857" container ...
	I1115 11:40:22.601014  750044 machine.go:94] provisionDockerMachine start ...
	I1115 11:40:22.601114  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:22.618232  750044 main.go:143] libmachine: Using SSH client type: native
	I1115 11:40:22.618553  750044 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33764 <nil> <nil>}
	I1115 11:40:22.618568  750044 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:40:22.768430  750044 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-137857
	
	I1115 11:40:22.768474  750044 ubuntu.go:182] provisioning hostname "pause-137857"
	I1115 11:40:22.768540  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:22.786189  750044 main.go:143] libmachine: Using SSH client type: native
	I1115 11:40:22.786540  750044 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33764 <nil> <nil>}
	I1115 11:40:22.786561  750044 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-137857 && echo "pause-137857" | sudo tee /etc/hostname
	I1115 11:40:22.955653  750044 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-137857
	
	I1115 11:40:22.955734  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:22.990517  750044 main.go:143] libmachine: Using SSH client type: native
	I1115 11:40:22.990828  750044 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33764 <nil> <nil>}
	I1115 11:40:22.990844  750044 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-137857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-137857/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-137857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:40:23.161704  750044 main.go:143] libmachine: SSH cmd err, output: <nil>: 
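[Editor's note] The shell snippet a few lines above ensures /etc/hosts maps 127.0.1.1 to the machine hostname. The following is a rough standalone equivalent, under the assumption that it operates on a local copy named "hosts" rather than the live /etc/hosts; it is illustrative, not minikube's implementation.

// hosts_fixup.go - hedged sketch of the /etc/hosts fix-up performed by the logged shell snippet.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	text := string(data)
	// Already present anywhere in the file (mirrors the grep -xq '.*\s<name>' check).
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(text) {
		return nil
	}
	// Rewrite an existing 127.0.1.1 line, otherwise append one.
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(text) {
		text = re.ReplaceAllString(text, "127.0.1.1 "+name)
	} else {
		if !strings.HasSuffix(text, "\n") {
			text += "\n"
		}
		text += "127.0.1.1 " + name + "\n"
	}
	return os.WriteFile(path, []byte(text), 0o644)
}

func main() {
	// "hosts" is a local copy used for illustration; the log edits /etc/hosts over SSH.
	if err := ensureHostsEntry("hosts", "pause-137857"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
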
	I1115 11:40:23.161797  750044 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:40:23.161838  750044 ubuntu.go:190] setting up certificates
	I1115 11:40:23.161861  750044 provision.go:84] configureAuth start
	I1115 11:40:23.161941  750044 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-137857
	I1115 11:40:23.186502  750044 provision.go:143] copyHostCerts
	I1115 11:40:23.186568  750044 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:40:23.186583  750044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:40:23.186659  750044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:40:23.186760  750044 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:40:23.186766  750044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:40:23.186794  750044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:40:23.186848  750044 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:40:23.186853  750044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:40:23.186876  750044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:40:23.186921  750044 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.pause-137857 san=[127.0.0.1 192.168.85.2 localhost minikube pause-137857]
	I1115 11:40:23.788402  750044 provision.go:177] copyRemoteCerts
	I1115 11:40:23.788493  750044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:40:23.788552  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:23.806098  750044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33764 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/pause-137857/id_rsa Username:docker}
	I1115 11:40:23.913191  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:40:23.931003  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:40:23.950141  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 11:40:23.968633  750044 provision.go:87] duration metric: took 806.735388ms to configureAuth
	I1115 11:40:23.968661  750044 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:40:23.968914  750044 config.go:182] Loaded profile config "pause-137857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:40:23.969030  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:23.986100  750044 main.go:143] libmachine: Using SSH client type: native
	I1115 11:40:23.986421  750044 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33764 <nil> <nil>}
	I1115 11:40:23.986441  750044 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:40:29.334327  750044 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:40:29.334350  750044 machine.go:97] duration metric: took 6.733326599s to provisionDockerMachine
	I1115 11:40:29.334361  750044 start.go:293] postStartSetup for "pause-137857" (driver="docker")
	I1115 11:40:29.334372  750044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:40:29.334445  750044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:40:29.334496  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:29.353124  750044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33764 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/pause-137857/id_rsa Username:docker}
	I1115 11:40:29.458070  750044 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:40:29.462723  750044 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:40:29.462754  750044 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:40:29.462766  750044 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:40:29.462823  750044 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:40:29.462919  750044 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:40:29.463036  750044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:40:29.471724  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:40:29.496504  750044 start.go:296] duration metric: took 162.127566ms for postStartSetup
	I1115 11:40:29.496598  750044 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:40:29.496649  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:29.516652  750044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33764 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/pause-137857/id_rsa Username:docker}
	I1115 11:40:29.631339  750044 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:40:29.638361  750044 fix.go:56] duration metric: took 7.058928115s for fixHost
	I1115 11:40:29.638384  750044 start.go:83] releasing machines lock for "pause-137857", held for 7.058975491s
	I1115 11:40:29.638452  750044 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-137857
	I1115 11:40:29.658129  750044 ssh_runner.go:195] Run: cat /version.json
	I1115 11:40:29.658215  750044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:40:29.658284  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:29.658296  750044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-137857
	I1115 11:40:29.690186  750044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33764 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/pause-137857/id_rsa Username:docker}
	I1115 11:40:29.698824  750044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33764 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/pause-137857/id_rsa Username:docker}
	I1115 11:40:29.901603  750044 ssh_runner.go:195] Run: systemctl --version
	I1115 11:40:29.913413  750044 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:40:29.982866  750044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:40:30.002376  750044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:40:30.002462  750044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:40:30.013723  750044 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:40:30.013748  750044 start.go:496] detecting cgroup driver to use...
	I1115 11:40:30.013787  750044 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:40:30.013849  750044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:40:30.037942  750044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:40:30.076803  750044 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:40:30.076897  750044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:40:30.123598  750044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:40:30.141756  750044 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:40:30.345781  750044 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:40:30.476196  750044 docker.go:234] disabling docker service ...
	I1115 11:40:30.476271  750044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:40:30.492043  750044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:40:30.506022  750044 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:40:30.647299  750044 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:40:30.787802  750044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:40:30.802257  750044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:40:30.818088  750044 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:40:30.818169  750044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:40:30.828086  750044 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:40:30.828172  750044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:40:30.838145  750044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:40:30.847943  750044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:40:30.858008  750044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:40:30.867305  750044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:40:30.877461  750044 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:40:30.887099  750044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:40:30.896988  750044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:40:30.905481  750044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:40:30.913871  750044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:40:31.048036  750044 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:40:31.245401  750044 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:40:31.245530  750044 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:40:31.249775  750044 start.go:564] Will wait 60s for crictl version
	I1115 11:40:31.249869  750044 ssh_runner.go:195] Run: which crictl
	I1115 11:40:31.253524  750044 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:40:31.277149  750044 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:40:31.277272  750044 ssh_runner.go:195] Run: crio --version
	I1115 11:40:31.307071  750044 ssh_runner.go:195] Run: crio --version
	I1115 11:40:31.340938  750044 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
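[Editor's note] The CRI-O preparation above is driven by sed commands over SSH (pause_image, cgroup_manager, conmon_cgroup, default_sysctls). A hedged local equivalent of the first two substitutions is sketched below; it assumes a local copy of 02-crio.conf and uses the same key names and values that appear in the logged sed invocations.

// crio_conf_rewrite.go - sketch of the pause_image / cgroup_manager rewrites logged above.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // assumption: local copy of /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
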
	I1115 11:40:29.412911  735859 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:40:29.413322  735859 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 11:40:29.413365  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 11:40:29.413429  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 11:40:29.439848  735859 cri.go:89] found id: "ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:29.439870  735859 cri.go:89] found id: ""
	I1115 11:40:29.439879  735859 logs.go:282] 1 containers: [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd]
	I1115 11:40:29.439940  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:29.443870  735859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 11:40:29.443950  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 11:40:29.481762  735859 cri.go:89] found id: ""
	I1115 11:40:29.481787  735859 logs.go:282] 0 containers: []
	W1115 11:40:29.481797  735859 logs.go:284] No container was found matching "etcd"
	I1115 11:40:29.481803  735859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 11:40:29.481861  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 11:40:29.524354  735859 cri.go:89] found id: ""
	I1115 11:40:29.524375  735859 logs.go:282] 0 containers: []
	W1115 11:40:29.524384  735859 logs.go:284] No container was found matching "coredns"
	I1115 11:40:29.524391  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 11:40:29.524451  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 11:40:29.565015  735859 cri.go:89] found id: "aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:29.565036  735859 cri.go:89] found id: ""
	I1115 11:40:29.565044  735859 logs.go:282] 1 containers: [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f]
	I1115 11:40:29.565102  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:29.568815  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 11:40:29.568922  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 11:40:29.599959  735859 cri.go:89] found id: ""
	I1115 11:40:29.599983  735859 logs.go:282] 0 containers: []
	W1115 11:40:29.599993  735859 logs.go:284] No container was found matching "kube-proxy"
	I1115 11:40:29.600005  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 11:40:29.600067  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 11:40:29.630278  735859 cri.go:89] found id: "e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:29.630301  735859 cri.go:89] found id: ""
	I1115 11:40:29.630309  735859 logs.go:282] 1 containers: [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138]
	I1115 11:40:29.630364  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:29.635730  735859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 11:40:29.635803  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 11:40:29.682511  735859 cri.go:89] found id: ""
	I1115 11:40:29.682533  735859 logs.go:282] 0 containers: []
	W1115 11:40:29.682542  735859 logs.go:284] No container was found matching "kindnet"
	I1115 11:40:29.682548  735859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 11:40:29.682605  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 11:40:29.724651  735859 cri.go:89] found id: ""
	I1115 11:40:29.724676  735859 logs.go:282] 0 containers: []
	W1115 11:40:29.724685  735859 logs.go:284] No container was found matching "storage-provisioner"
	I1115 11:40:29.724694  735859 logs.go:123] Gathering logs for describe nodes ...
	I1115 11:40:29.724705  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 11:40:29.806075  735859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 11:40:29.806096  735859 logs.go:123] Gathering logs for kube-apiserver [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd] ...
	I1115 11:40:29.806113  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:29.858235  735859 logs.go:123] Gathering logs for kube-scheduler [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f] ...
	I1115 11:40:29.858271  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:29.935792  735859 logs.go:123] Gathering logs for kube-controller-manager [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138] ...
	I1115 11:40:29.935876  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:29.967571  735859 logs.go:123] Gathering logs for CRI-O ...
	I1115 11:40:29.967598  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 11:40:30.035481  735859 logs.go:123] Gathering logs for container status ...
	I1115 11:40:30.035524  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 11:40:30.119914  735859 logs.go:123] Gathering logs for kubelet ...
	I1115 11:40:30.120003  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 11:40:30.278357  735859 logs.go:123] Gathering logs for dmesg ...
	I1115 11:40:30.278436  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 11:40:31.343867  750044 cli_runner.go:164] Run: docker network inspect pause-137857 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:40:31.360553  750044 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 11:40:31.364716  750044 kubeadm.go:884] updating cluster {Name:pause-137857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-137857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:40:31.364915  750044 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:40:31.364991  750044 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:40:31.397216  750044 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:40:31.397245  750044 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:40:31.397305  750044 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:40:31.429547  750044 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:40:31.429572  750044 cache_images.go:86] Images are preloaded, skipping loading
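[Editor's note] The "crictl images --output json" calls above are how the preload check decides that all images are already present. A small sketch of decoding that output follows; the "images"/"repoTags" field names are assumed from crictl's JSON form and the snippet requires crictl on PATH.

// crictl_images.go - hedged sketch of listing image tags via crictl's JSON output.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"` // assumed field name
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatal(err)
	}
	for _, img := range imgs.Images {
		fmt.Println(img.RepoTags)
	}
}
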
	I1115 11:40:31.429579  750044 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1115 11:40:31.429687  750044 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-137857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-137857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
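[Editor's note] The kubelet systemd drop-in above is rendered from the node's name, IP, and Kubernetes version. The sketch below reproduces that rendering with text/template using the values shown in the log; it is an illustration, not the generator minikube actually uses.

// kubelet_unit.go - hedged sketch of rendering the kubelet ExecStart drop-in shown above.
package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values taken from the logged node config: pause-137857 / 192.168.85.2 / v1.34.1.
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.1", "pause-137857", "192.168.85.2"})
}
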
	I1115 11:40:31.429781  750044 ssh_runner.go:195] Run: crio config
	I1115 11:40:31.497029  750044 cni.go:84] Creating CNI manager for ""
	I1115 11:40:31.497052  750044 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:40:31.497069  750044 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:40:31.497112  750044 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-137857 NodeName:pause-137857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:40:31.497284  750044 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-137857"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 11:40:31.497365  750044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:40:31.506287  750044 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:40:31.506368  750044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:40:31.514487  750044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1115 11:40:31.527957  750044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:40:31.541904  750044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1115 11:40:31.555585  750044 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:40:31.559338  750044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:40:31.688678  750044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:40:31.702174  750044 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857 for IP: 192.168.85.2
	I1115 11:40:31.702195  750044 certs.go:195] generating shared ca certs ...
	I1115 11:40:31.702212  750044 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:40:31.702350  750044 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:40:31.702395  750044 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:40:31.702405  750044 certs.go:257] generating profile certs ...
	I1115 11:40:31.702491  750044 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/client.key
	I1115 11:40:31.702559  750044 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/apiserver.key.430a59de
	I1115 11:40:31.702600  750044 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/proxy-client.key
	I1115 11:40:31.702710  750044 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:40:31.702747  750044 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:40:31.702763  750044 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:40:31.702789  750044 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:40:31.702814  750044 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:40:31.702841  750044 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:40:31.702887  750044 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:40:31.703534  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:40:31.723309  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:40:31.743790  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:40:31.762728  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:40:31.781757  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 11:40:31.799336  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:40:31.817357  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:40:31.835575  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:40:31.853805  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:40:31.871855  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:40:31.889358  750044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:40:31.907436  750044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:40:31.920418  750044 ssh_runner.go:195] Run: openssl version
	I1115 11:40:31.926699  750044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:40:31.935189  750044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:40:31.939100  750044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:40:31.939168  750044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:40:31.982413  750044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:40:31.990315  750044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:40:32.000251  750044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:40:32.006977  750044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:40:32.007093  750044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:40:32.049001  750044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:40:32.057434  750044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:40:32.066494  750044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:40:32.070534  750044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:40:32.070638  750044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:40:32.112577  750044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:40:32.120833  750044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:40:32.124916  750044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:40:32.166455  750044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:40:32.208556  750044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:40:32.259331  750044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:40:32.307517  750044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:40:32.356011  750044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 11:40:32.408880  750044 kubeadm.go:401] StartCluster: {Name:pause-137857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-137857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:40:32.409008  750044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:40:32.409095  750044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:40:32.509009  750044 cri.go:89] found id: "11fc878711b4b05161fecbabbccacaac0a3ea8614883fb13f4fdb0e5aa15a538"
	I1115 11:40:32.509076  750044 cri.go:89] found id: "dd0616de4773a796a23dd40e00be0c3d01316f7f2591993fd06c018d4a4aa991"
	I1115 11:40:32.509095  750044 cri.go:89] found id: "f987d39fb8e9536febaac7a736e61e364b97c1cde64982f9af503c04295401e2"
	I1115 11:40:32.509113  750044 cri.go:89] found id: "fdd5538b2f7f7e62b59f154b2a363d4687c38db318c41a946f26501d7164d4dd"
	I1115 11:40:32.509131  750044 cri.go:89] found id: "6acd4d6f33ed454af357db0198a45dfe3418d3e9027f6741e2204f23bbd28f6a"
	I1115 11:40:32.509148  750044 cri.go:89] found id: "94bac5dfed4e1eb49a8b8809a81cb583d530dd957d56e7afb6dae60ae4e02b66"
	I1115 11:40:32.509164  750044 cri.go:89] found id: "2d26f0dee211fd9e4cf2cd430bd4cd091ee46599dc64ce64ac55aef62ac2077f"
	I1115 11:40:32.509181  750044 cri.go:89] found id: "de91188330d1a20583f7966a076883bee5455862d604125627d4c3041168253d"
	I1115 11:40:32.509210  750044 cri.go:89] found id: ""
	I1115 11:40:32.509276  750044 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 11:40:32.527195  750044 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:40:32Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:40:32.527334  750044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:40:32.543579  750044 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:40:32.543647  750044 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:40:32.543715  750044 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:40:32.565296  750044 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:40:32.565955  750044 kubeconfig.go:125] found "pause-137857" server: "https://192.168.85.2:8443"
	I1115 11:40:32.566826  750044 kapi.go:59] client config for pause-137857: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 11:40:32.567419  750044 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 11:40:32.567497  750044 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 11:40:32.567518  750044 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 11:40:32.567538  750044 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 11:40:32.567559  750044 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 11:40:32.567878  750044 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:40:32.590251  750044 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 11:40:32.590326  750044 kubeadm.go:602] duration metric: took 46.657198ms to restartPrimaryControlPlane
	I1115 11:40:32.590349  750044 kubeadm.go:403] duration metric: took 181.496048ms to StartCluster
	I1115 11:40:32.590380  750044 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:40:32.590457  750044 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:40:32.591331  750044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:40:32.591621  750044 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:40:32.592061  750044 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:40:32.592243  750044 config.go:182] Loaded profile config "pause-137857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:40:32.595350  750044 out.go:179] * Enabled addons: 
	I1115 11:40:32.595462  750044 out.go:179] * Verifying Kubernetes components...
	I1115 11:40:32.798274  735859 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:40:32.798668  735859 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 11:40:32.798715  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 11:40:32.798772  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 11:40:32.853912  735859 cri.go:89] found id: "ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:32.853933  735859 cri.go:89] found id: ""
	I1115 11:40:32.853941  735859 logs.go:282] 1 containers: [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd]
	I1115 11:40:32.853999  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:32.858337  735859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 11:40:32.858417  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 11:40:32.897146  735859 cri.go:89] found id: ""
	I1115 11:40:32.897172  735859 logs.go:282] 0 containers: []
	W1115 11:40:32.897180  735859 logs.go:284] No container was found matching "etcd"
	I1115 11:40:32.897186  735859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 11:40:32.897248  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 11:40:32.943386  735859 cri.go:89] found id: ""
	I1115 11:40:32.943412  735859 logs.go:282] 0 containers: []
	W1115 11:40:32.943421  735859 logs.go:284] No container was found matching "coredns"
	I1115 11:40:32.943428  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 11:40:32.943492  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 11:40:32.981557  735859 cri.go:89] found id: "aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:32.981580  735859 cri.go:89] found id: ""
	I1115 11:40:32.981588  735859 logs.go:282] 1 containers: [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f]
	I1115 11:40:32.981641  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:32.987644  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 11:40:32.987719  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 11:40:33.034215  735859 cri.go:89] found id: ""
	I1115 11:40:33.034241  735859 logs.go:282] 0 containers: []
	W1115 11:40:33.034250  735859 logs.go:284] No container was found matching "kube-proxy"
	I1115 11:40:33.034257  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 11:40:33.034324  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 11:40:33.084139  735859 cri.go:89] found id: "e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:33.084163  735859 cri.go:89] found id: ""
	I1115 11:40:33.084174  735859 logs.go:282] 1 containers: [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138]
	I1115 11:40:33.084234  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:33.091933  735859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 11:40:33.092006  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 11:40:33.138117  735859 cri.go:89] found id: ""
	I1115 11:40:33.138143  735859 logs.go:282] 0 containers: []
	W1115 11:40:33.138152  735859 logs.go:284] No container was found matching "kindnet"
	I1115 11:40:33.138158  735859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 11:40:33.138221  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 11:40:33.194769  735859 cri.go:89] found id: ""
	I1115 11:40:33.194797  735859 logs.go:282] 0 containers: []
	W1115 11:40:33.194805  735859 logs.go:284] No container was found matching "storage-provisioner"
	I1115 11:40:33.194814  735859 logs.go:123] Gathering logs for kube-controller-manager [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138] ...
	I1115 11:40:33.194851  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:33.246395  735859 logs.go:123] Gathering logs for CRI-O ...
	I1115 11:40:33.246422  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 11:40:33.317521  735859 logs.go:123] Gathering logs for container status ...
	I1115 11:40:33.317558  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 11:40:33.370857  735859 logs.go:123] Gathering logs for kubelet ...
	I1115 11:40:33.370884  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 11:40:33.531366  735859 logs.go:123] Gathering logs for dmesg ...
	I1115 11:40:33.531449  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 11:40:33.556181  735859 logs.go:123] Gathering logs for describe nodes ...
	I1115 11:40:33.556207  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 11:40:33.695772  735859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 11:40:33.695842  735859 logs.go:123] Gathering logs for kube-apiserver [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd] ...
	I1115 11:40:33.695869  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:33.769231  735859 logs.go:123] Gathering logs for kube-scheduler [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f] ...
	I1115 11:40:33.769267  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:36.351410  735859 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:40:36.351780  735859 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 11:40:36.351821  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 11:40:36.351873  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 11:40:36.406205  735859 cri.go:89] found id: "ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:36.406224  735859 cri.go:89] found id: ""
	I1115 11:40:36.406232  735859 logs.go:282] 1 containers: [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd]
	I1115 11:40:36.406299  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:36.410401  735859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 11:40:36.410472  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 11:40:36.458741  735859 cri.go:89] found id: ""
	I1115 11:40:36.458811  735859 logs.go:282] 0 containers: []
	W1115 11:40:36.458822  735859 logs.go:284] No container was found matching "etcd"
	I1115 11:40:36.458830  735859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 11:40:36.458923  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 11:40:36.498987  735859 cri.go:89] found id: ""
	I1115 11:40:36.499060  735859 logs.go:282] 0 containers: []
	W1115 11:40:36.499082  735859 logs.go:284] No container was found matching "coredns"
	I1115 11:40:36.499104  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 11:40:36.499195  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 11:40:36.554202  735859 cri.go:89] found id: "aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:36.554276  735859 cri.go:89] found id: ""
	I1115 11:40:36.554299  735859 logs.go:282] 1 containers: [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f]
	I1115 11:40:36.554392  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:36.561534  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 11:40:36.561615  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 11:40:36.603698  735859 cri.go:89] found id: ""
	I1115 11:40:36.603723  735859 logs.go:282] 0 containers: []
	W1115 11:40:36.603732  735859 logs.go:284] No container was found matching "kube-proxy"
	I1115 11:40:36.603741  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 11:40:36.603797  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 11:40:36.641314  735859 cri.go:89] found id: "e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:36.641337  735859 cri.go:89] found id: ""
	I1115 11:40:36.641345  735859 logs.go:282] 1 containers: [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138]
	I1115 11:40:36.641404  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:36.645615  735859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 11:40:36.645686  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 11:40:36.677592  735859 cri.go:89] found id: ""
	I1115 11:40:36.677617  735859 logs.go:282] 0 containers: []
	W1115 11:40:36.677625  735859 logs.go:284] No container was found matching "kindnet"
	I1115 11:40:36.677631  735859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 11:40:36.677691  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 11:40:36.723093  735859 cri.go:89] found id: ""
	I1115 11:40:36.723119  735859 logs.go:282] 0 containers: []
	W1115 11:40:36.723129  735859 logs.go:284] No container was found matching "storage-provisioner"
	I1115 11:40:36.723137  735859 logs.go:123] Gathering logs for kube-apiserver [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd] ...
	I1115 11:40:36.723149  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:36.794752  735859 logs.go:123] Gathering logs for kube-scheduler [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f] ...
	I1115 11:40:36.794787  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:32.610081  750044 addons.go:515] duration metric: took 17.99325ms for enable addons: enabled=[]
	I1115 11:40:32.610255  750044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:40:32.898825  750044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:40:32.932431  750044 node_ready.go:35] waiting up to 6m0s for node "pause-137857" to be "Ready" ...
	I1115 11:40:36.891493  735859 logs.go:123] Gathering logs for kube-controller-manager [e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138] ...
	I1115 11:40:36.891572  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:36.930771  735859 logs.go:123] Gathering logs for CRI-O ...
	I1115 11:40:36.930848  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 11:40:37.016083  735859 logs.go:123] Gathering logs for container status ...
	I1115 11:40:37.018564  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 11:40:37.099675  735859 logs.go:123] Gathering logs for kubelet ...
	I1115 11:40:37.099755  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 11:40:37.245232  735859 logs.go:123] Gathering logs for dmesg ...
	I1115 11:40:37.245265  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 11:40:37.273257  735859 logs.go:123] Gathering logs for describe nodes ...
	I1115 11:40:37.273282  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 11:40:37.396754  735859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 11:40:39.896986  735859 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:40:37.791981  750044 node_ready.go:49] node "pause-137857" is "Ready"
	I1115 11:40:37.792015  750044 node_ready.go:38] duration metric: took 4.859544167s for node "pause-137857" to be "Ready" ...
	I1115 11:40:37.792031  750044 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:40:37.792094  750044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:40:37.810784  750044 api_server.go:72] duration metric: took 5.219096392s to wait for apiserver process to appear ...
	I1115 11:40:37.810811  750044 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:40:37.810831  750044 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 11:40:37.855328  750044 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 11:40:37.855358  750044 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 11:40:38.310892  750044 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 11:40:38.321182  750044 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:40:38.321209  750044 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:40:38.811366  750044 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 11:40:38.819661  750044 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:40:38.819693  750044 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:40:39.310946  750044 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 11:40:39.320228  750044 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 11:40:39.321646  750044 api_server.go:141] control plane version: v1.34.1
	I1115 11:40:39.321687  750044 api_server.go:131] duration metric: took 1.510867615s to wait for apiserver health ...
	I1115 11:40:39.321701  750044 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:40:39.325639  750044 system_pods.go:59] 7 kube-system pods found
	I1115 11:40:39.325685  750044 system_pods.go:61] "coredns-66bc5c9577-frrt2" [1267fcdc-111d-4540-bc10-4db6499c760a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:40:39.325700  750044 system_pods.go:61] "etcd-pause-137857" [7ed09d18-cbdf-4bd4-92f6-a794c81510a3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:40:39.325706  750044 system_pods.go:61] "kindnet-gtpl9" [a93dc784-4bb8-4091-b97d-54dbd2773c1a] Running
	I1115 11:40:39.325714  750044 system_pods.go:61] "kube-apiserver-pause-137857" [f0fd7683-99e7-475a-a2c8-f0ac268f10a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:40:39.325724  750044 system_pods.go:61] "kube-controller-manager-pause-137857" [93aaba4f-8401-43b8-b65c-9bee4d6b801f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:40:39.325732  750044 system_pods.go:61] "kube-proxy-pfg9h" [669bdfff-ffd7-414a-8459-f937c2fa2162] Running
	I1115 11:40:39.325750  750044 system_pods.go:61] "kube-scheduler-pause-137857" [b2cdbf76-ec95-436f-990d-1434ac98d7be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:40:39.325762  750044 system_pods.go:74] duration metric: took 4.053554ms to wait for pod list to return data ...
	I1115 11:40:39.325771  750044 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:40:39.328322  750044 default_sa.go:45] found service account: "default"
	I1115 11:40:39.328385  750044 default_sa.go:55] duration metric: took 2.592538ms for default service account to be created ...
	I1115 11:40:39.328408  750044 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:40:39.331675  750044 system_pods.go:86] 7 kube-system pods found
	I1115 11:40:39.331742  750044 system_pods.go:89] "coredns-66bc5c9577-frrt2" [1267fcdc-111d-4540-bc10-4db6499c760a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:40:39.331778  750044 system_pods.go:89] "etcd-pause-137857" [7ed09d18-cbdf-4bd4-92f6-a794c81510a3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:40:39.331798  750044 system_pods.go:89] "kindnet-gtpl9" [a93dc784-4bb8-4091-b97d-54dbd2773c1a] Running
	I1115 11:40:39.331826  750044 system_pods.go:89] "kube-apiserver-pause-137857" [f0fd7683-99e7-475a-a2c8-f0ac268f10a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:40:39.331857  750044 system_pods.go:89] "kube-controller-manager-pause-137857" [93aaba4f-8401-43b8-b65c-9bee4d6b801f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:40:39.331877  750044 system_pods.go:89] "kube-proxy-pfg9h" [669bdfff-ffd7-414a-8459-f937c2fa2162] Running
	I1115 11:40:39.331898  750044 system_pods.go:89] "kube-scheduler-pause-137857" [b2cdbf76-ec95-436f-990d-1434ac98d7be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:40:39.331935  750044 system_pods.go:126] duration metric: took 3.507761ms to wait for k8s-apps to be running ...
	I1115 11:40:39.331957  750044 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:40:39.332035  750044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:40:39.348716  750044 system_svc.go:56] duration metric: took 16.747218ms WaitForService to wait for kubelet
	I1115 11:40:39.348791  750044 kubeadm.go:587] duration metric: took 6.75710933s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:40:39.348826  750044 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:40:39.351657  750044 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:40:39.351740  750044 node_conditions.go:123] node cpu capacity is 2
	I1115 11:40:39.351777  750044 node_conditions.go:105] duration metric: took 2.928319ms to run NodePressure ...
	I1115 11:40:39.351816  750044 start.go:242] waiting for startup goroutines ...
	I1115 11:40:39.351838  750044 start.go:247] waiting for cluster config update ...
	I1115 11:40:39.351866  750044 start.go:256] writing updated cluster config ...
	I1115 11:40:39.352248  750044 ssh_runner.go:195] Run: rm -f paused
	I1115 11:40:39.360249  750044 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:40:39.361072  750044 kapi.go:59] client config for pause-137857: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/profiles/pause-137857/client.key", CAFile:"/home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 11:40:39.364487  750044 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-frrt2" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:40:41.370289  750044 pod_ready.go:104] pod "coredns-66bc5c9577-frrt2" is not "Ready", error: <nil>
	I1115 11:40:44.897322  735859 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1115 11:40:44.897396  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 11:40:44.897492  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 11:40:44.923250  735859 cri.go:89] found id: "1c83146f160646dec3ff9e163a428a9fa09ea338505f0dd5e3d2ced2d8113b55"
	I1115 11:40:44.923274  735859 cri.go:89] found id: "ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:44.923279  735859 cri.go:89] found id: ""
	I1115 11:40:44.923286  735859 logs.go:282] 2 containers: [1c83146f160646dec3ff9e163a428a9fa09ea338505f0dd5e3d2ced2d8113b55 ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd]
	I1115 11:40:44.923345  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:44.927136  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:44.930718  735859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 11:40:44.930791  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 11:40:44.956772  735859 cri.go:89] found id: ""
	I1115 11:40:44.956797  735859 logs.go:282] 0 containers: []
	W1115 11:40:44.956806  735859 logs.go:284] No container was found matching "etcd"
	I1115 11:40:44.956812  735859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 11:40:44.956902  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 11:40:44.983426  735859 cri.go:89] found id: ""
	I1115 11:40:44.983452  735859 logs.go:282] 0 containers: []
	W1115 11:40:44.983460  735859 logs.go:284] No container was found matching "coredns"
	I1115 11:40:44.983467  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 11:40:44.983536  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 11:40:45.067007  735859 cri.go:89] found id: "aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f"
	I1115 11:40:45.067038  735859 cri.go:89] found id: ""
	I1115 11:40:45.067049  735859 logs.go:282] 1 containers: [aa0d61928d64aaf9d00d3d37d7004314ff8705f8fe850c8e0082db2864bd2e7f]
	I1115 11:40:45.067117  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:45.076555  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 11:40:45.076643  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 11:40:45.169041  735859 cri.go:89] found id: ""
	I1115 11:40:45.169067  735859 logs.go:282] 0 containers: []
	W1115 11:40:45.169076  735859 logs.go:284] No container was found matching "kube-proxy"
	I1115 11:40:45.169083  735859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 11:40:45.169151  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 11:40:45.255539  735859 cri.go:89] found id: "ceaea6bfe285cb036171f548f24930a62493a452213c9dce8a316086c7fb819b"
	I1115 11:40:45.255641  735859 cri.go:89] found id: "e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138"
	I1115 11:40:45.255682  735859 cri.go:89] found id: ""
	I1115 11:40:45.255725  735859 logs.go:282] 2 containers: [ceaea6bfe285cb036171f548f24930a62493a452213c9dce8a316086c7fb819b e5cb669b93e09b246cbe4a542528fc8a19c6dda81d452259c443902b00582138]
	I1115 11:40:45.255890  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:45.267844  735859 ssh_runner.go:195] Run: which crictl
	I1115 11:40:45.280083  735859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 11:40:45.280177  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 11:40:45.319965  735859 cri.go:89] found id: ""
	I1115 11:40:45.320003  735859 logs.go:282] 0 containers: []
	W1115 11:40:45.320013  735859 logs.go:284] No container was found matching "kindnet"
	I1115 11:40:45.320020  735859 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 11:40:45.320150  735859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 11:40:45.348652  735859 cri.go:89] found id: ""
	I1115 11:40:45.348678  735859 logs.go:282] 0 containers: []
	W1115 11:40:45.348687  735859 logs.go:284] No container was found matching "storage-provisioner"
	I1115 11:40:45.348702  735859 logs.go:123] Gathering logs for kube-apiserver [ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd] ...
	I1115 11:40:45.348714  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ec77c57d88128f7052582246562b28dfde33e99f10ba95f341cb308ffc5f91bd"
	I1115 11:40:45.380190  735859 logs.go:123] Gathering logs for container status ...
	I1115 11:40:45.380346  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 11:40:45.414174  735859 logs.go:123] Gathering logs for kubelet ...
	I1115 11:40:45.414201  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 11:40:45.529962  735859 logs.go:123] Gathering logs for dmesg ...
	I1115 11:40:45.529998  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 11:40:45.547467  735859 logs.go:123] Gathering logs for describe nodes ...
	I1115 11:40:45.547497  735859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 11:40:43.377411  750044 pod_ready.go:104] pod "coredns-66bc5c9577-frrt2" is not "Ready", error: <nil>
	I1115 11:40:44.870720  750044 pod_ready.go:94] pod "coredns-66bc5c9577-frrt2" is "Ready"
	I1115 11:40:44.870752  750044 pod_ready.go:86] duration metric: took 5.506192218s for pod "coredns-66bc5c9577-frrt2" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:44.873496  750044 pod_ready.go:83] waiting for pod "etcd-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:40:46.879307  750044 pod_ready.go:104] pod "etcd-pause-137857" is not "Ready", error: <nil>
	W1115 11:40:49.379719  750044 pod_ready.go:104] pod "etcd-pause-137857" is not "Ready", error: <nil>
	I1115 11:40:50.379499  750044 pod_ready.go:94] pod "etcd-pause-137857" is "Ready"
	I1115 11:40:50.379529  750044 pod_ready.go:86] duration metric: took 5.506007332s for pod "etcd-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:50.382203  750044 pod_ready.go:83] waiting for pod "kube-apiserver-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:50.386541  750044 pod_ready.go:94] pod "kube-apiserver-pause-137857" is "Ready"
	I1115 11:40:50.386614  750044 pod_ready.go:86] duration metric: took 4.384765ms for pod "kube-apiserver-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:50.388987  750044 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:50.393983  750044 pod_ready.go:94] pod "kube-controller-manager-pause-137857" is "Ready"
	I1115 11:40:50.394007  750044 pod_ready.go:86] duration metric: took 4.99332ms for pod "kube-controller-manager-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:50.396349  750044 pod_ready.go:83] waiting for pod "kube-proxy-pfg9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:50.577738  750044 pod_ready.go:94] pod "kube-proxy-pfg9h" is "Ready"
	I1115 11:40:50.577765  750044 pod_ready.go:86] duration metric: took 181.391139ms for pod "kube-proxy-pfg9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:50.778070  750044 pod_ready.go:83] waiting for pod "kube-scheduler-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:51.177037  750044 pod_ready.go:94] pod "kube-scheduler-pause-137857" is "Ready"
	I1115 11:40:51.177061  750044 pod_ready.go:86] duration metric: took 398.964178ms for pod "kube-scheduler-pause-137857" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:40:51.177074  750044 pod_ready.go:40] duration metric: took 11.816748848s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:40:51.238613  750044 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:40:51.241826  750044 out.go:179] * Done! kubectl is now configured to use "pause-137857" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.66914554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.762680663Z" level=info msg="Created container 0edaa841d5b54f4378cd6d83469319e8ac4f8aac30757c315abf3dbec49fc8d1: kube-system/kube-apiserver-pause-137857/kube-apiserver" id=800b206c-efe4-4cdb-8668-4d15be1dd626 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.763479504Z" level=info msg="Starting container: 0edaa841d5b54f4378cd6d83469319e8ac4f8aac30757c315abf3dbec49fc8d1" id=f3abd657-9704-41c5-b8a2-b35df5575e55 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.766558076Z" level=info msg="Started container" PID=2388 containerID=0edaa841d5b54f4378cd6d83469319e8ac4f8aac30757c315abf3dbec49fc8d1 description=kube-system/kube-apiserver-pause-137857/kube-apiserver id=f3abd657-9704-41c5-b8a2-b35df5575e55 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9fa07442b56e274d46a78c78d1acd8da703f63d4d81901cede0ad975f94ce77
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.78821757Z" level=info msg="Created container 058db932812e69f341abcac250349bd6e8c187dafd11cc56fcda36d8609e59e1: kube-system/etcd-pause-137857/etcd" id=32eb41a0-e90e-4e7f-b135-43e3eb32e0cb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.789641778Z" level=info msg="Starting container: 058db932812e69f341abcac250349bd6e8c187dafd11cc56fcda36d8609e59e1" id=e3715dc6-011b-4368-881f-0073e90e0b4a name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.789769336Z" level=info msg="Created container a2e23ebc9fd1b0ca7799e0345fcc1c875b47bc77bd022f34805c8090a4fe0f0e: kube-system/kube-controller-manager-pause-137857/kube-controller-manager" id=2a825e93-326c-436e-a4ab-52869861567f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.790631973Z" level=info msg="Starting container: a2e23ebc9fd1b0ca7799e0345fcc1c875b47bc77bd022f34805c8090a4fe0f0e" id=ce09ac2c-adce-4521-a4f2-edd09b6cac48 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.791880475Z" level=info msg="Started container" PID=2375 containerID=058db932812e69f341abcac250349bd6e8c187dafd11cc56fcda36d8609e59e1 description=kube-system/etcd-pause-137857/etcd id=e3715dc6-011b-4368-881f-0073e90e0b4a name=/runtime.v1.RuntimeService/StartContainer sandboxID=943f4c184f8611ec94708c705265ddeb21a6d4ca00808c7dc65092c1cd983a99
	Nov 15 11:40:32 pause-137857 crio[2062]: time="2025-11-15T11:40:32.796026009Z" level=info msg="Started container" PID=2382 containerID=a2e23ebc9fd1b0ca7799e0345fcc1c875b47bc77bd022f34805c8090a4fe0f0e description=kube-system/kube-controller-manager-pause-137857/kube-controller-manager id=ce09ac2c-adce-4521-a4f2-edd09b6cac48 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6fafba999cfae38323e1f391d1d26ff8cce13fbc06e61ab15e62e7589519eb7a
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.925586945Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.930514065Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.930552728Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.930575366Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.93406542Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.934107037Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.934128272Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.939649284Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.939688767Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.939710503Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.944609594Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.944648347Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.944671658Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.951349947Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:40:42 pause-137857 crio[2062]: time="2025-11-15T11:40:42.951389299Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0edaa841d5b54       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago       Running             kube-apiserver            1                   b9fa07442b56e       kube-apiserver-pause-137857            kube-system
	058db932812e6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   24 seconds ago       Running             etcd                      1                   943f4c184f861       etcd-pause-137857                      kube-system
	a2e23ebc9fd1b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   24 seconds ago       Running             kube-controller-manager   1                   6fafba999cfae       kube-controller-manager-pause-137857   kube-system
	14380df9df23f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago       Running             kube-scheduler            1                   7f68f02aedddb       kube-scheduler-pause-137857            kube-system
	86ddf29301be3       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   f4b2aead6dd93       coredns-66bc5c9577-frrt2               kube-system
	11fc878711b4b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   24 seconds ago       Running             kube-proxy                1                   2de5d237b7269       kube-proxy-pfg9h                       kube-system
	753a589caf043       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   d50493133689a       kindnet-gtpl9                          kube-system
	dd0616de4773a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   f4b2aead6dd93       coredns-66bc5c9577-frrt2               kube-system
	f987d39fb8e95       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   2de5d237b7269       kube-proxy-pfg9h                       kube-system
	fdd5538b2f7f7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   d50493133689a       kindnet-gtpl9                          kube-system
	6acd4d6f33ed4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   7f68f02aedddb       kube-scheduler-pause-137857            kube-system
	94bac5dfed4e1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   6fafba999cfae       kube-controller-manager-pause-137857   kube-system
	2d26f0dee211f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   b9fa07442b56e       kube-apiserver-pause-137857            kube-system
	de91188330d1a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   943f4c184f861       etcd-pause-137857                      kube-system
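The table above is CRI-O's view of every container on the node, including the Exited attempt-0 instances that were replaced when the control plane restarted. Assuming the node is still running, a similar listing can be pulled straight from the runtime; the flag/quoting style below is illustrative rather than what the harness actually invoked:

	out/minikube-linux-arm64 -p pause-137857 ssh "sudo crictl ps -a"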
	
	
	==> coredns [86ddf29301be37859289c1c5f546685bc84187eeffd2b7f42158ec98d7a8b59f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59674 - 15543 "HINFO IN 6713335303970030121.2701297401939487917. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01368394s
	
	
	==> coredns [dd0616de4773a796a23dd40e00be0c3d01316f7f2591993fd06c018d4a4aa991] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50912 - 14507 "HINFO IN 4710523890366550557.2551221312603868006. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026977444s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
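The two coredns blocks are the instances on either side of the restart: dd0616de4773a (attempt 0) received SIGTERM and entered lameduck mode when the control plane went down, while 86ddf29301be3 (attempt 1) kept retrying list calls against 10.96.0.1:443 until the restarted apiserver answered and then started with an unsynced cache. Assuming the kubectl context set up in the start log, the exited instance corresponds to the pod's previous container:

	kubectl --context pause-137857 -n kube-system logs coredns-66bc5c9577-frrt2 --previous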
	
	
	==> describe nodes <==
	Name:               pause-137857
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-137857
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=pause-137857
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_39_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:39:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-137857
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:40:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:40:19 +0000   Sat, 15 Nov 2025 11:39:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:40:19 +0000   Sat, 15 Nov 2025 11:39:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:40:19 +0000   Sat, 15 Nov 2025 11:39:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:40:19 +0000   Sat, 15 Nov 2025 11:40:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-137857
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                5b75f958-fccb-41b6-88bf-5a1b0ef1e957
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-frrt2                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-137857                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kindnet-gtpl9                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-137857             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-pause-137857    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-pfg9h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-137857             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 76s                kube-proxy       
	  Normal   Starting                 18s                kube-proxy       
	  Normal   NodeHasSufficientPID     93s (x8 over 93s)  kubelet          Node pause-137857 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 93s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  93s (x8 over 93s)  kubelet          Node pause-137857 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    93s (x8 over 93s)  kubelet          Node pause-137857 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 93s                kubelet          Starting kubelet.
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  84s                kubelet          Node pause-137857 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    84s                kubelet          Node pause-137857 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     84s                kubelet          Node pause-137857 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                node-controller  Node pause-137857 event: Registered Node pause-137857 in Controller
	  Normal   NodeReady                38s                kubelet          Node pause-137857 status is now: NodeReady
	  Normal   RegisteredNode           16s                node-controller  Node pause-137857 event: Registered Node pause-137857 in Controller
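The node description above can be re-queried while the cluster is up; assuming the default context wiring reported in the start log, the equivalent is:

	kubectl --context pause-137857 describe node pause-137857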
	
	
	==> dmesg <==
	[Nov15 11:08] overlayfs: idmapped layers are currently not supported
	[Nov15 11:09] overlayfs: idmapped layers are currently not supported
	[Nov15 11:10] overlayfs: idmapped layers are currently not supported
	[  +3.526164] overlayfs: idmapped layers are currently not supported
	[Nov15 11:12] overlayfs: idmapped layers are currently not supported
	[Nov15 11:16] overlayfs: idmapped layers are currently not supported
	[Nov15 11:18] overlayfs: idmapped layers are currently not supported
	[Nov15 11:22] overlayfs: idmapped layers are currently not supported
	[Nov15 11:23] overlayfs: idmapped layers are currently not supported
	[Nov15 11:24] overlayfs: idmapped layers are currently not supported
	[Nov15 11:25] overlayfs: idmapped layers are currently not supported
	[Nov15 11:26] overlayfs: idmapped layers are currently not supported
	[Nov15 11:28] overlayfs: idmapped layers are currently not supported
	[ +23.116625] overlayfs: idmapped layers are currently not supported
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [058db932812e69f341abcac250349bd6e8c187dafd11cc56fcda36d8609e59e1] <==
	{"level":"warn","ts":"2025-11-15T11:40:35.752701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.772891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.800886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.836893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.845603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.857437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.879506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.898353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.911393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.933345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.957019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.966637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:35.990010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.010938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.023164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.047386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.063581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.077726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.141439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.163760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.175397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.208430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.241318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.256978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:40:36.309085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50924","server-name":"","error":"EOF"}
	
	
	==> etcd [de91188330d1a20583f7966a076883bee5455862d604125627d4c3041168253d] <==
	{"level":"warn","ts":"2025-11-15T11:39:28.412391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:39:28.462892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:39:28.484286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:39:28.517834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:39:28.549969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:39:28.720221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32904","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T11:39:38.450153Z","caller":"traceutil/trace.go:172","msg":"trace[1373993601] transaction","detail":"{read_only:false; response_revision:353; number_of_response:1; }","duration":"116.714311ms","start":"2025-11-15T11:39:38.333423Z","end":"2025-11-15T11:39:38.450137Z","steps":["trace[1373993601] 'process raft request'  (duration: 84.375694ms)","trace[1373993601] 'compare'  (duration: 31.858121ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T11:40:24.163402Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-15T11:40:24.163468Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-137857","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-15T11:40:24.163569Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T11:40:24.306963Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T11:40:24.307063Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T11:40:24.307087Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-15T11:40:24.307173Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-15T11:40:24.307200Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-15T11:40:24.307257Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T11:40:24.307326Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T11:40:24.307361Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-15T11:40:24.307428Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T11:40:24.307445Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T11:40:24.307453Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T11:40:24.310569Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-15T11:40:24.310648Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T11:40:24.310716Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T11:40:24.310741Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-137857","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 11:40:57 up  3:23,  0 user,  load average: 2.17, 3.05, 2.46
	Linux pause-137857 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [753a589caf043fd7414736e947ca13435428a97c154d59c7685ee4e40b4cb298] <==
	I1115 11:40:32.619191       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:40:32.624283       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 11:40:32.624436       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:40:32.624449       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:40:32.624461       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:40:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1115 11:40:32.928322       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1115 11:40:32.928729       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:40:32.928740       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:40:32.928749       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:40:32.929074       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 11:40:32.929198       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 11:40:32.929278       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 11:40:32.929602       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 11:40:38.030478       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:40:38.030603       1 metrics.go:72] Registering metrics
	I1115 11:40:38.030695       1 controller.go:711] "Syncing nftables rules"
	I1115 11:40:42.925208       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:40:42.925257       1 main.go:301] handling current node
	I1115 11:40:52.927003       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:40:52.927049       1 main.go:301] handling current node
	
	
	==> kindnet [fdd5538b2f7f7e62b59f154b2a363d4687c38db318c41a946f26501d7164d4dd] <==
	I1115 11:39:39.299419       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:39:39.299851       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 11:39:39.300014       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:39:39.300056       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:39:39.300098       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:39:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:39:39.498309       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:39:39.498337       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:39:39.498347       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:39:39.498451       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 11:40:09.499243       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 11:40:09.499249       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 11:40:09.499355       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 11:40:09.499500       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1115 11:40:10.898526       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:40:10.898577       1 metrics.go:72] Registering metrics
	I1115 11:40:10.898654       1 controller.go:711] "Syncing nftables rules"
	I1115 11:40:19.505118       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:40:19.505181       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0edaa841d5b54f4378cd6d83469319e8ac4f8aac30757c315abf3dbec49fc8d1] <==
	I1115 11:40:37.886800       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 11:40:37.899800       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 11:40:37.905544       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 11:40:37.905638       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 11:40:37.905840       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 11:40:37.905907       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 11:40:37.911114       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 11:40:37.911145       1 policy_source.go:240] refreshing policies
	I1115 11:40:37.920686       1 aggregator.go:171] initial CRD sync complete...
	I1115 11:40:37.920763       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 11:40:37.920794       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 11:40:37.920823       1 cache.go:39] Caches are synced for autoregister controller
	I1115 11:40:37.942140       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 11:40:37.942381       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 11:40:37.942437       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 11:40:37.947764       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 11:40:37.947989       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1115 11:40:37.960417       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 11:40:37.970419       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:40:38.582326       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:40:39.665633       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 11:40:41.066685       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 11:40:41.264346       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 11:40:41.313278       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 11:40:41.465890       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [2d26f0dee211fd9e4cf2cd430bd4cd091ee46599dc64ce64ac55aef62ac2077f] <==
	W1115 11:40:24.179668       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.179710       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.179754       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.179797       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.179879       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.181888       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.181967       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.182134       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.182188       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.182693       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.182772       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.182832       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.182886       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183121       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183161       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183205       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183239       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183272       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183827       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183880       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.183934       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.184219       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.184270       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.184327       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1115 11:40:24.184366       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [94bac5dfed4e1eb49a8b8809a81cb583d530dd957d56e7afb6dae60ae4e02b66] <==
	I1115 11:39:37.514043       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 11:39:37.522038       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:39:37.511805       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 11:39:37.528428       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 11:39:37.528615       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-137857"
	I1115 11:39:37.528717       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 11:39:37.528787       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 11:39:37.528831       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 11:39:37.511816       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 11:39:37.535234       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 11:39:37.542645       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 11:39:37.535381       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:39:37.535495       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 11:39:37.535935       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-137857" podCIDRs=["10.244.0.0/24"]
	I1115 11:39:37.535484       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 11:39:37.549071       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 11:39:37.560252       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:39:37.560275       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:39:37.560285       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:39:37.565370       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 11:39:37.565685       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 11:39:37.565836       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 11:39:37.567136       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:39:37.573731       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 11:40:22.535056       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [a2e23ebc9fd1b0ca7799e0345fcc1c875b47bc77bd022f34805c8090a4fe0f0e] <==
	I1115 11:40:41.066874       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 11:40:41.066960       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 11:40:41.069683       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 11:40:41.072703       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:40:41.073052       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 11:40:41.076937       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 11:40:41.080161       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 11:40:41.086534       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 11:40:41.086613       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 11:40:41.086660       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 11:40:41.086677       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 11:40:41.086684       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 11:40:41.089870       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 11:40:41.093425       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 11:40:41.106464       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 11:40:41.106464       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 11:40:41.106589       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 11:40:41.107581       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 11:40:41.107637       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 11:40:41.107669       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 11:40:41.111023       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 11:40:41.112469       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:40:41.134791       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:40:41.134819       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:40:41.134836       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [11fc878711b4b05161fecbabbccacaac0a3ea8614883fb13f4fdb0e5aa15a538] <==
	I1115 11:40:32.636973       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:40:33.800887       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:40:37.954365       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:40:37.974163       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 11:40:37.974332       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:40:38.215526       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:40:38.215592       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:40:38.288962       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:40:38.294296       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:40:38.344894       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:40:38.406305       1 config.go:200] "Starting service config controller"
	I1115 11:40:38.407632       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:40:38.407767       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:40:38.407798       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:40:38.407834       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:40:38.407862       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:40:38.408532       1 config.go:309] "Starting node config controller"
	I1115 11:40:38.415029       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:40:38.415127       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:40:38.507885       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 11:40:38.508843       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:40:38.513916       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [f987d39fb8e9536febaac7a736e61e364b97c1cde64982f9af503c04295401e2] <==
	I1115 11:39:40.389442       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:39:40.482440       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:39:40.582892       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:39:40.583026       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 11:39:40.583161       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:39:40.602940       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:39:40.603060       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:39:40.607333       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:39:40.607708       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:39:40.608115       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:39:40.611423       1 config.go:200] "Starting service config controller"
	I1115 11:39:40.611501       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:39:40.611539       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:39:40.611577       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:39:40.611609       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:39:40.611634       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:39:40.612598       1 config.go:309] "Starting node config controller"
	I1115 11:39:40.612690       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:39:40.612723       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:39:40.712383       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:39:40.712480       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 11:39:40.712499       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [14380df9df23f9d41205f28106bd8a47807ea891d5c0d8a8f437a06ab753b04c] <==
	I1115 11:40:35.004363       1 serving.go:386] Generated self-signed cert in-memory
	I1115 11:40:38.585328       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 11:40:38.585429       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:40:38.592434       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 11:40:38.592521       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 11:40:38.592561       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 11:40:38.592590       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 11:40:38.609611       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:40:38.609643       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:40:38.609664       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:40:38.609670       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:40:38.692679       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 11:40:38.710138       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:40:38.710265       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [6acd4d6f33ed454af357db0198a45dfe3418d3e9027f6741e2204f23bbd28f6a] <==
	E1115 11:39:30.889272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 11:39:30.897954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 11:39:30.898121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 11:39:30.898237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 11:39:30.898278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 11:39:30.898362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:39:30.898404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 11:39:30.898457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 11:39:30.898519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 11:39:30.898538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 11:39:30.898632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 11:39:30.898675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 11:39:30.898742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 11:39:30.898749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 11:39:30.898789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 11:39:30.898898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 11:39:30.898887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:39:30.898950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1115 11:39:32.080460       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:40:24.169343       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1115 11:40:24.169374       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1115 11:40:24.169398       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1115 11:40:24.169426       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:40:24.169635       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1115 11:40:24.169652       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.533841    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfg9h\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="669bdfff-ffd7-414a-8459-f937c2fa2162" pod="kube-system/kube-proxy-pfg9h"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.533977    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-frrt2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1267fcdc-111d-4540-bc10-4db6499c760a" pod="kube-system/coredns-66bc5c9577-frrt2"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.534111    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6ce87924d4e6aec5abfbf3b1f82d6cde" pod="kube-system/etcd-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: I1115 11:40:32.596832    1307 scope.go:117] "RemoveContainer" containerID="de91188330d1a20583f7966a076883bee5455862d604125627d4c3041168253d"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.597820    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6ce87924d4e6aec5abfbf3b1f82d6cde" pod="kube-system/etcd-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.598020    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="97ccbb7cf4e8e6e0045f2479434e619b" pod="kube-system/kube-apiserver-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.598178    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a843aecd7cdae402f31837f9ba53da77" pod="kube-system/kube-controller-manager-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.598314    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d197660a217ec3c231e642bc19a69329" pod="kube-system/kube-scheduler-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.598452    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gtpl9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a93dc784-4bb8-4091-b97d-54dbd2773c1a" pod="kube-system/kindnet-gtpl9"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.598585    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfg9h\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="669bdfff-ffd7-414a-8459-f937c2fa2162" pod="kube-system/kube-proxy-pfg9h"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.598718    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-frrt2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1267fcdc-111d-4540-bc10-4db6499c760a" pod="kube-system/coredns-66bc5c9577-frrt2"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: I1115 11:40:32.601722    1307 scope.go:117] "RemoveContainer" containerID="2d26f0dee211fd9e4cf2cd430bd4cd091ee46599dc64ce64ac55aef62ac2077f"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.602332    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-frrt2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1267fcdc-111d-4540-bc10-4db6499c760a" pod="kube-system/coredns-66bc5c9577-frrt2"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.602612    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6ce87924d4e6aec5abfbf3b1f82d6cde" pod="kube-system/etcd-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.602867    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="97ccbb7cf4e8e6e0045f2479434e619b" pod="kube-system/kube-apiserver-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.603122    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a843aecd7cdae402f31837f9ba53da77" pod="kube-system/kube-controller-manager-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.603367    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-137857\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d197660a217ec3c231e642bc19a69329" pod="kube-system/kube-scheduler-pause-137857"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.603606    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gtpl9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a93dc784-4bb8-4091-b97d-54dbd2773c1a" pod="kube-system/kindnet-gtpl9"
	Nov 15 11:40:32 pause-137857 kubelet[1307]: E1115 11:40:32.603849    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfg9h\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="669bdfff-ffd7-414a-8459-f937c2fa2162" pod="kube-system/kube-proxy-pfg9h"
	Nov 15 11:40:37 pause-137857 kubelet[1307]: E1115 11:40:37.649912    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-pfg9h\" is forbidden: User \"system:node:pause-137857\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-137857' and this object" podUID="669bdfff-ffd7-414a-8459-f937c2fa2162" pod="kube-system/kube-proxy-pfg9h"
	Nov 15 11:40:37 pause-137857 kubelet[1307]: E1115 11:40:37.650609    1307 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-137857\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-137857' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 15 11:40:43 pause-137857 kubelet[1307]: W1115 11:40:43.477802    1307 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 15 11:40:51 pause-137857 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 11:40:51 pause-137857 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 11:40:51 pause-137857 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-137857 -n pause-137857
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-137857 -n pause-137857: exit status 2 (371.454186ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-137857 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.03s)
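A note on the status probe in the post-mortem above: `minikube status` reports component health both in the templated output and in its exit code, which is why the helper treats exit status 2 as potentially acceptable ("may be ok") even though the template printed "Running". A minimal sketch for capturing the two signals separately, reusing the binary path and profile name from this run:

    # capture the templated field and the exit code in one go
    out=$(out/minikube-linux-arm64 status --format='{{.APIServer}}' -p pause-137857 -n pause-137857)
    rc=$?
    # a non-zero exit signals that some component is not in the expected state;
    # the exact code meanings are defined by minikube, so treat them as advisory here
    echo "APIServer=${out} exit=${rc}"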

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-872969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-872969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (258.008278ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:44:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-872969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
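The MK_ADDON_ENABLE_PAUSED error above comes from the paused-state check that `addons enable` performs on the node before applying the addon: per the stderr, it shells out to `sudo runc list -f json`, which fails here because `/run/runc` does not exist on this CRI-O node. A rough way to reproduce the probe by hand and to inspect container state through CRI-O directly (the crictl call is an alternative inspection path, not what minikube itself runs):

    # the exact command the probe reports as failing
    out/minikube-linux-arm64 ssh -p old-k8s-version-872969 -- "sudo runc list -f json"
    # alternative: list running containers via the CRI-O CLI on the node
    out/minikube-linux-arm64 ssh -p old-k8s-version-872969 -- "sudo crictl ps"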
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-872969 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-872969 describe deploy/metrics-server -n kube-system: exit status 1 (81.27323ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-872969 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
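The assertion above expects the `--registries` and `--images` overrides to compose into the image reference `fake.domain/registry.k8s.io/echoserver:1.4` on the metrics-server deployment; here the deployment was never created, so there was nothing to inspect. In a run where the addon does deploy, a minimal sketch for reading the image that was actually rolled out (same kubectl context and deployment name as in this test):

    kubectl --context old-k8s-version-872969 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # in a passing run this should contain: fake.domain/registry.k8s.io/echoserver:1.4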
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-872969
helpers_test.go:243: (dbg) docker inspect old-k8s-version-872969:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80",
	        "Created": "2025-11-15T11:43:29.514556564Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 767408,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:43:29.583183755Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/hostname",
	        "HostsPath": "/var/lib/docker/containers/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/hosts",
	        "LogPath": "/var/lib/docker/containers/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80-json.log",
	        "Name": "/old-k8s-version-872969",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-872969:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-872969",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80",
	                "LowerDir": "/var/lib/docker/overlay2/d28583d9fd967090aec67f47dcc0a8108b77dda2eb9d81dce80920e8f83075ef-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d28583d9fd967090aec67f47dcc0a8108b77dda2eb9d81dce80920e8f83075ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d28583d9fd967090aec67f47dcc0a8108b77dda2eb9d81dce80920e8f83075ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d28583d9fd967090aec67f47dcc0a8108b77dda2eb9d81dce80920e8f83075ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-872969",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-872969/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-872969",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-872969",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-872969",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "54233e1e821b8e1e35b05e52bcf61a2f075fbf889d2aefc272586eee92d902cb",
	            "SandboxKey": "/var/run/docker/netns/54233e1e821b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33789"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33790"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33793"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33791"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33792"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-872969": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:ae:3b:d7:34:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fe74aaea9f1ff898d8b3c6c329ef26fb68a67a4e5377e568964777357f485456",
	                    "EndpointID": "3458dfa2f32721a8ba6c503df9bb8b024f9c165b411b842c3d85d1c8e7da0fee",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-872969",
	                        "661ed5bad40f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-872969 -n old-k8s-version-872969
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-872969 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-872969 logs -n 25: (1.24467716s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-949287 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo containerd config dump                                                                                                                                                                                                  │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo crio config                                                                                                                                                                                                             │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ delete  │ -p cilium-949287                                                                                                                                                                                                                              │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │ 15 Nov 25 11:41 UTC │
	│ start   │ -p force-systemd-env-386707 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-386707  │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │ 15 Nov 25 11:42 UTC │
	│ delete  │ -p kubernetes-upgrade-436490                                                                                                                                                                                                                  │ kubernetes-upgrade-436490 │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:42 UTC │
	│ start   │ -p cert-expiration-636406 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-636406    │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:43 UTC │
	│ delete  │ -p force-systemd-env-386707                                                                                                                                                                                                                   │ force-systemd-env-386707  │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:42 UTC │
	│ start   │ -p cert-options-303284 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-303284       │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:43 UTC │
	│ ssh     │ cert-options-303284 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-303284       │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ ssh     │ -p cert-options-303284 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-303284       │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ delete  │ -p cert-options-303284                                                                                                                                                                                                                        │ cert-options-303284       │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:44 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-872969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:43:23
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:43:23.612001  767019 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:43:23.612199  767019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:43:23.612230  767019 out.go:374] Setting ErrFile to fd 2...
	I1115 11:43:23.612250  767019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:43:23.612517  767019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:43:23.613025  767019 out.go:368] Setting JSON to false
	I1115 11:43:23.613939  767019 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12355,"bootTime":1763194649,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:43:23.614037  767019 start.go:143] virtualization:  
	I1115 11:43:23.617665  767019 out.go:179] * [old-k8s-version-872969] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:43:23.622313  767019 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:43:23.622382  767019 notify.go:221] Checking for updates...
	I1115 11:43:23.629519  767019 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:43:23.632794  767019 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:43:23.636146  767019 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:43:23.639244  767019 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:43:23.642357  767019 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:43:23.645773  767019 config.go:182] Loaded profile config "cert-expiration-636406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:43:23.645886  767019 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:43:23.674813  767019 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:43:23.674942  767019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:43:23.738678  767019 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:43:23.729399188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:43:23.738789  767019 docker.go:319] overlay module found
	I1115 11:43:23.742098  767019 out.go:179] * Using the docker driver based on user configuration
	I1115 11:43:23.745077  767019 start.go:309] selected driver: docker
	I1115 11:43:23.745099  767019 start.go:930] validating driver "docker" against <nil>
	I1115 11:43:23.745127  767019 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:43:23.745890  767019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:43:23.803279  767019 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:43:23.792834965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:43:23.803434  767019 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 11:43:23.803699  767019 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:43:23.806775  767019 out.go:179] * Using Docker driver with root privileges
	I1115 11:43:23.809610  767019 cni.go:84] Creating CNI manager for ""
	I1115 11:43:23.809675  767019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:43:23.809686  767019 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 11:43:23.809764  767019 start.go:353] cluster config:
	{Name:old-k8s-version-872969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:43:23.812927  767019 out.go:179] * Starting "old-k8s-version-872969" primary control-plane node in "old-k8s-version-872969" cluster
	I1115 11:43:23.815788  767019 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:43:23.818640  767019 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:43:23.821578  767019 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 11:43:23.821636  767019 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1115 11:43:23.821648  767019 cache.go:65] Caching tarball of preloaded images
	I1115 11:43:23.821658  767019 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:43:23.821731  767019 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:43:23.821741  767019 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1115 11:43:23.821840  767019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/config.json ...
	I1115 11:43:23.821857  767019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/config.json: {Name:mk1811f3e5121ffc69083aaefc7cc2ef2a248c49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:43:23.841260  767019 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:43:23.841285  767019 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:43:23.841307  767019 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:43:23.841330  767019 start.go:360] acquireMachinesLock for old-k8s-version-872969: {Name:mk8e7def530b80cef5a2809f08776681cf0304db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:43:23.841441  767019 start.go:364] duration metric: took 91.161µs to acquireMachinesLock for "old-k8s-version-872969"
	I1115 11:43:23.841471  767019 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-872969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:43:23.841545  767019 start.go:125] createHost starting for "" (driver="docker")
	I1115 11:43:23.845042  767019 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 11:43:23.845351  767019 start.go:159] libmachine.API.Create for "old-k8s-version-872969" (driver="docker")
	I1115 11:43:23.845413  767019 client.go:173] LocalClient.Create starting
	I1115 11:43:23.845479  767019 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 11:43:23.845871  767019 main.go:143] libmachine: Decoding PEM data...
	I1115 11:43:23.845898  767019 main.go:143] libmachine: Parsing certificate...
	I1115 11:43:23.845963  767019 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 11:43:23.845993  767019 main.go:143] libmachine: Decoding PEM data...
	I1115 11:43:23.846007  767019 main.go:143] libmachine: Parsing certificate...
	I1115 11:43:23.846412  767019 cli_runner.go:164] Run: docker network inspect old-k8s-version-872969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 11:43:23.862338  767019 cli_runner.go:211] docker network inspect old-k8s-version-872969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 11:43:23.862429  767019 network_create.go:284] running [docker network inspect old-k8s-version-872969] to gather additional debugging logs...
	I1115 11:43:23.862451  767019 cli_runner.go:164] Run: docker network inspect old-k8s-version-872969
	W1115 11:43:23.881360  767019 cli_runner.go:211] docker network inspect old-k8s-version-872969 returned with exit code 1
	I1115 11:43:23.881388  767019 network_create.go:287] error running [docker network inspect old-k8s-version-872969]: docker network inspect old-k8s-version-872969: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-872969 not found
	I1115 11:43:23.881402  767019 network_create.go:289] output of [docker network inspect old-k8s-version-872969]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-872969 not found
	
	** /stderr **
	I1115 11:43:23.881530  767019 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:43:23.898110  767019 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-70b4341e5839 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:cf:e4:18:31:11} reservation:<nil>}
	I1115 11:43:23.898454  767019 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5353e0ad5224 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:f4:9a:df:ce:52} reservation:<nil>}
	I1115 11:43:23.898809  767019 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-cf2ab118f937 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:c9:22:19:21:27} reservation:<nil>}
	I1115 11:43:23.899043  767019 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9c71d89a60cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3e:ba:5c:b8:a5:71} reservation:<nil>}
	I1115 11:43:23.899484  767019 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a243b0}
	I1115 11:43:23.899506  767019 network_create.go:124] attempt to create docker network old-k8s-version-872969 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1115 11:43:23.899567  767019 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-872969 old-k8s-version-872969
	I1115 11:43:23.965194  767019 network_create.go:108] docker network old-k8s-version-872969 192.168.85.0/24 created
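(The bridge network created above can be checked by hand with the same docker CLI the test drives; a minimal sketch, assuming the profile name old-k8s-version-872969 from this run:

    docker network inspect old-k8s-version-872969 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'   # expect: 192.168.85.0/24 192.168.85.1
)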
	I1115 11:43:23.965244  767019 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-872969" container
	I1115 11:43:23.965325  767019 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 11:43:23.982036  767019 cli_runner.go:164] Run: docker volume create old-k8s-version-872969 --label name.minikube.sigs.k8s.io=old-k8s-version-872969 --label created_by.minikube.sigs.k8s.io=true
	I1115 11:43:24.006976  767019 oci.go:103] Successfully created a docker volume old-k8s-version-872969
	I1115 11:43:24.007166  767019 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-872969-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-872969 --entrypoint /usr/bin/test -v old-k8s-version-872969:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 11:43:24.537955  767019 oci.go:107] Successfully prepared a docker volume old-k8s-version-872969
	I1115 11:43:24.538020  767019 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 11:43:24.538034  767019 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 11:43:24.538102  767019 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-872969:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 11:43:29.445777  767019 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-872969:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.907625085s)
	I1115 11:43:29.445810  767019 kic.go:203] duration metric: took 4.907772911s to extract preloaded images to volume ...
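(The step above unpacks the lz4-compressed preload tarball into the profile's docker volume. A quick, illustrative way to list what such a preload contains, assuming GNU tar and an lz4 binary are available on the host:

    tar -I lz4 -tf /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 | head
)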
	W1115 11:43:29.445944  767019 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 11:43:29.446061  767019 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 11:43:29.500426  767019 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-872969 --name old-k8s-version-872969 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-872969 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-872969 --network old-k8s-version-872969 --ip 192.168.85.2 --volume old-k8s-version-872969:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 11:43:29.806627  767019 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Running}}
	I1115 11:43:29.827429  767019 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:43:29.848770  767019 cli_runner.go:164] Run: docker exec old-k8s-version-872969 stat /var/lib/dpkg/alternatives/iptables
	I1115 11:43:29.903686  767019 oci.go:144] the created container "old-k8s-version-872969" has a running status.
	I1115 11:43:29.903716  767019 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa...
	I1115 11:43:30.299263  767019 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 11:43:30.324188  767019 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:43:30.351321  767019 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 11:43:30.351344  767019 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-872969 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 11:43:30.423661  767019 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:43:30.454253  767019 machine.go:94] provisionDockerMachine start ...
	I1115 11:43:30.454362  767019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:43:30.487423  767019 main.go:143] libmachine: Using SSH client type: native
	I1115 11:43:30.487745  767019 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33789 <nil> <nil>}
	I1115 11:43:30.487765  767019 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:43:30.488413  767019 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49864->127.0.0.1:33789: read: connection reset by peer
	I1115 11:43:33.640642  767019 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-872969
	
	I1115 11:43:33.640664  767019 ubuntu.go:182] provisioning hostname "old-k8s-version-872969"
	I1115 11:43:33.640736  767019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:43:33.658311  767019 main.go:143] libmachine: Using SSH client type: native
	I1115 11:43:33.658636  767019 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33789 <nil> <nil>}
	I1115 11:43:33.658655  767019 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-872969 && echo "old-k8s-version-872969" | sudo tee /etc/hostname
	I1115 11:43:33.822975  767019 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-872969
	
	I1115 11:43:33.823070  767019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:43:33.841990  767019 main.go:143] libmachine: Using SSH client type: native
	I1115 11:43:33.842308  767019 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33789 <nil> <nil>}
	I1115 11:43:33.842330  767019 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-872969' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-872969/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-872969' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:43:33.993224  767019 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:43:33.993312  767019 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:43:33.993369  767019 ubuntu.go:190] setting up certificates
	I1115 11:43:33.993390  767019 provision.go:84] configureAuth start
	I1115 11:43:33.993478  767019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-872969
	I1115 11:43:34.017162  767019 provision.go:143] copyHostCerts
	I1115 11:43:34.017301  767019 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:43:34.017316  767019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:43:34.017404  767019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:43:34.017514  767019 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:43:34.017526  767019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:43:34.017553  767019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:43:34.017618  767019 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:43:34.017627  767019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:43:34.017651  767019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:43:34.017714  767019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-872969 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-872969]
	I1115 11:43:34.394133  767019 provision.go:177] copyRemoteCerts
	I1115 11:43:34.394199  767019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:43:34.394249  767019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:43:34.412449  767019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33789 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:43:34.520695  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:43:34.538353  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1115 11:43:34.558093  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:43:34.576839  767019 provision.go:87] duration metric: took 583.409336ms to configureAuth
	I1115 11:43:34.576888  767019 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:43:34.577114  767019 config.go:182] Loaded profile config "old-k8s-version-872969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 11:43:34.577235  767019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:43:34.596095  767019 main.go:143] libmachine: Using SSH client type: native
	I1115 11:43:34.596481  767019 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33789 <nil> <nil>}
	I1115 11:43:34.596510  767019 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:43:34.866060  767019 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:43:34.866088  767019 machine.go:97] duration metric: took 4.411812053s to provisionDockerMachine
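(The SSH command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube inside the node container and restarts CRI-O. A hedged sketch of how that could be confirmed from the host, assuming the container is named after the profile:

    docker exec old-k8s-version-872969 cat /etc/sysconfig/crio.minikube
    # expected content: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
)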
	I1115 11:43:34.866099  767019 client.go:176] duration metric: took 11.020677391s to LocalClient.Create
	I1115 11:43:34.866111  767019 start.go:167] duration metric: took 11.020763932s to libmachine.API.Create "old-k8s-version-872969"
	I1115 11:43:34.866126  767019 start.go:293] postStartSetup for "old-k8s-version-872969" (driver="docker")
	I1115 11:43:34.866139  767019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:43:34.866215  767019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:43:34.866258  767019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:43:34.887712  767019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33789 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:43:34.997298  767019 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:43:35.007014  767019 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:43:35.007046  767019 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:43:35.007059  767019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:43:35.007142  767019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:43:35.007235  767019 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:43:35.007344  767019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:43:35.016661  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:43:35.037263  767019 start.go:296] duration metric: took 171.118457ms for postStartSetup
	I1115 11:43:35.037705  767019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-872969
	I1115 11:43:35.055048  767019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/config.json ...
	I1115 11:43:35.055473  767019 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:43:35.055575  767019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:43:35.077112  767019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33789 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:43:35.182019  767019 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:43:35.187323  767019 start.go:128] duration metric: took 11.345761417s to createHost
	I1115 11:43:35.187349  767019 start.go:83] releasing machines lock for "old-k8s-version-872969", held for 11.345896s
	I1115 11:43:35.187424  767019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-872969
	I1115 11:43:35.208039  767019 ssh_runner.go:195] Run: cat /version.json
	I1115 11:43:35.208059  767019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:43:35.208100  767019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:43:35.208127  767019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:43:35.228207  767019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33789 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:43:35.238467  767019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33789 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:43:35.333088  767019 ssh_runner.go:195] Run: systemctl --version
	I1115 11:43:35.427619  767019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:43:35.471117  767019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:43:35.475643  767019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:43:35.475719  767019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:43:35.504988  767019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 11:43:35.505060  767019 start.go:496] detecting cgroup driver to use...
	I1115 11:43:35.505108  767019 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:43:35.505189  767019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:43:35.521537  767019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:43:35.534082  767019 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:43:35.534148  767019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:43:35.552087  767019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:43:35.573751  767019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:43:35.699344  767019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:43:35.831631  767019 docker.go:234] disabling docker service ...
	I1115 11:43:35.831717  767019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:43:35.854227  767019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:43:35.867532  767019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:43:35.982088  767019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:43:36.112265  767019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:43:36.126231  767019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:43:36.141550  767019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1115 11:43:36.141647  767019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:43:36.150456  767019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:43:36.150551  767019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:43:36.159647  767019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:43:36.168216  767019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:43:36.177072  767019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:43:36.186079  767019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:43:36.194917  767019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:43:36.208754  767019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:43:36.217615  767019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:43:36.225060  767019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:43:36.232370  767019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:43:36.343637  767019 ssh_runner.go:195] Run: sudo systemctl restart crio
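(The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged port sysctl) before CRI-O is restarted. A minimal check of the resulting settings, run inside the node, e.g. via minikube ssh -p old-k8s-version-872969 (illustrative only):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
)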
	I1115 11:43:36.468946  767019 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:43:36.469070  767019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:43:36.472837  767019 start.go:564] Will wait 60s for crictl version
	I1115 11:43:36.472986  767019 ssh_runner.go:195] Run: which crictl
	I1115 11:43:36.476613  767019 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:43:36.506263  767019 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
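(The version probe above talks to CRI-O through crictl, using the socket path written to /etc/crictl.yaml a few lines earlier. The same information can be pulled manually inside the node; a sketch assuming the default socket path from this run:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
)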
	I1115 11:43:36.506405  767019 ssh_runner.go:195] Run: crio --version
	I1115 11:43:36.534568  767019 ssh_runner.go:195] Run: crio --version
	I1115 11:43:36.566994  767019 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1115 11:43:36.569745  767019 cli_runner.go:164] Run: docker network inspect old-k8s-version-872969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:43:36.585239  767019 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 11:43:36.589590  767019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:43:36.599190  767019 kubeadm.go:884] updating cluster {Name:old-k8s-version-872969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:43:36.599317  767019 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 11:43:36.599374  767019 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:43:36.640475  767019 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:43:36.640501  767019 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:43:36.640555  767019 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:43:36.666936  767019 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:43:36.666959  767019 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:43:36.666967  767019 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1115 11:43:36.667052  767019 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-872969 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
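(The kubelet unit and drop-in shown above are what minikube writes to /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in the scp steps that follow. Once the node is up, the effective unit could be reviewed inside it with standard systemd tooling (illustrative):

    systemctl cat kubelet
    systemctl status kubelet --no-pager
)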
	I1115 11:43:36.667136  767019 ssh_runner.go:195] Run: crio config
	I1115 11:43:36.724385  767019 cni.go:84] Creating CNI manager for ""
	I1115 11:43:36.724408  767019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:43:36.724423  767019 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:43:36.724445  767019 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-872969 NodeName:old-k8s-version-872969 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:43:36.724587  767019 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-872969"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 11:43:36.724665  767019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1115 11:43:36.732464  767019 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:43:36.732558  767019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:43:36.740168  767019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1115 11:43:36.753393  767019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:43:36.767890  767019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
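(The kubeadm config dumped above is staged here as /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml later in the log before minikube drives kubeadm itself. Outside the test flow, a rough sanity check of such a file could be a kubeadm dry run; a sketch, assuming the v1.28.0 kubeadm binary sits alongside the kubelet under /var/lib/minikube/binaries:

    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
)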
	I1115 11:43:36.780941  767019 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:43:36.784590  767019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:43:36.794006  767019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:43:36.908328  767019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:43:36.924070  767019 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969 for IP: 192.168.85.2
	I1115 11:43:36.924147  767019 certs.go:195] generating shared ca certs ...
	I1115 11:43:36.924180  767019 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:43:36.924375  767019 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:43:36.924465  767019 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:43:36.924491  767019 certs.go:257] generating profile certs ...
	I1115 11:43:36.924597  767019 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.key
	I1115 11:43:36.924638  767019 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt with IP's: []
	I1115 11:43:37.047820  767019 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt ...
	I1115 11:43:37.047852  767019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: {Name:mke0977fcb2fcbe1fc3a89f58bf9d1dcbf07c348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:43:37.048084  767019 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.key ...
	I1115 11:43:37.048101  767019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.key: {Name:mk055edb77475dc68c088f5d8544d332e2b3fa7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:43:37.048196  767019 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.key.5f4bae20
	I1115 11:43:37.048217  767019 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.crt.5f4bae20 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1115 11:43:37.409936  767019 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.crt.5f4bae20 ...
	I1115 11:43:37.409969  767019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.crt.5f4bae20: {Name:mkf5443769cce4e0f7e6da14286baecd6e0e067b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:43:37.410155  767019 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.key.5f4bae20 ...
	I1115 11:43:37.410171  767019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.key.5f4bae20: {Name:mk7fd2377941f1466fc56e3c5a2eb29dd3e824c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:43:37.410257  767019 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.crt.5f4bae20 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.crt
	I1115 11:43:37.410344  767019 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.key.5f4bae20 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.key
	I1115 11:43:37.410407  767019 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/proxy-client.key
	I1115 11:43:37.410421  767019 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/proxy-client.crt with IP's: []
	I1115 11:43:37.592903  767019 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/proxy-client.crt ...
	I1115 11:43:37.592931  767019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/proxy-client.crt: {Name:mk736a00f66f8a4f46dde432d063e184d1f4eeac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:43:37.593113  767019 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/proxy-client.key ...
	I1115 11:43:37.593128  767019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/proxy-client.key: {Name:mke9897d7a2e4a51a5a224e462693432fda17b32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:43:37.593310  767019 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:43:37.593352  767019 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:43:37.593361  767019 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:43:37.593385  767019 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:43:37.593412  767019 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:43:37.593436  767019 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:43:37.593484  767019 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:43:37.594113  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:43:37.612961  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:43:37.631084  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:43:37.648510  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:43:37.666407  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 11:43:37.702725  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:43:37.724421  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:43:37.746393  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:43:37.768657  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:43:37.787888  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:43:37.805877  767019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:43:37.824711  767019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:43:37.838055  767019 ssh_runner.go:195] Run: openssl version
	I1115 11:43:37.844356  767019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:43:37.852839  767019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:43:37.857025  767019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:43:37.857146  767019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:43:37.901516  767019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:43:37.909916  767019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:43:37.917987  767019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:43:37.921680  767019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:43:37.921742  767019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:43:37.963166  767019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:43:37.971627  767019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:43:37.980098  767019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:43:37.983764  767019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:43:37.983876  767019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:43:38.025694  767019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
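(The openssl calls above hash each CA certificate and link it into /etc/ssl/certs under its subject hash, the standard OpenSSL trust-store layout. The same certificates can be inspected directly; a minimal sketch using flags openssl x509 supports:

    openssl x509 -noout -subject -enddate -in /usr/share/ca-certificates/minikubeCA.pem
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the hash used for the symlink name (b5213941 in this run)
)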
	I1115 11:43:38.035231  767019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:43:38.039421  767019 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 11:43:38.039496  767019 kubeadm.go:401] StartCluster: {Name:old-k8s-version-872969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:43:38.039580  767019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:43:38.039647  767019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:43:38.070582  767019 cri.go:89] found id: ""
	I1115 11:43:38.070656  767019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:43:38.079693  767019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 11:43:38.088350  767019 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 11:43:38.088485  767019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 11:43:38.097125  767019 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 11:43:38.097145  767019 kubeadm.go:158] found existing configuration files:
	
	I1115 11:43:38.097199  767019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 11:43:38.105590  767019 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 11:43:38.105728  767019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 11:43:38.113649  767019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 11:43:38.121563  767019 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 11:43:38.121676  767019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 11:43:38.129880  767019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 11:43:38.137972  767019 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 11:43:38.138038  767019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 11:43:38.145937  767019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 11:43:38.154581  767019 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 11:43:38.154658  767019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 11:43:38.162262  767019 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 11:43:38.248801  767019 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 11:43:38.330967  767019 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 11:43:55.921302  767019 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1115 11:43:55.921361  767019 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 11:43:55.921452  767019 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 11:43:55.921515  767019 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 11:43:55.921551  767019 kubeadm.go:319] OS: Linux
	I1115 11:43:55.921599  767019 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 11:43:55.921649  767019 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 11:43:55.921699  767019 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 11:43:55.921749  767019 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 11:43:55.921798  767019 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 11:43:55.921849  767019 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 11:43:55.921896  767019 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 11:43:55.921946  767019 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 11:43:55.921994  767019 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 11:43:55.922070  767019 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 11:43:55.922168  767019 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 11:43:55.922273  767019 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1115 11:43:55.922342  767019 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 11:43:55.926954  767019 out.go:252]   - Generating certificates and keys ...
	I1115 11:43:55.927074  767019 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 11:43:55.927147  767019 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 11:43:55.927218  767019 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 11:43:55.927278  767019 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 11:43:55.927341  767019 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 11:43:55.927394  767019 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 11:43:55.927450  767019 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 11:43:55.927586  767019 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-872969] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 11:43:55.927641  767019 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 11:43:55.927770  767019 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-872969] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 11:43:55.927838  767019 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 11:43:55.927905  767019 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 11:43:55.927951  767019 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 11:43:55.928009  767019 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 11:43:55.928062  767019 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 11:43:55.928122  767019 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 11:43:55.928190  767019 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 11:43:55.928247  767019 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 11:43:55.928333  767019 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 11:43:55.928402  767019 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 11:43:55.933263  767019 out.go:252]   - Booting up control plane ...
	I1115 11:43:55.933402  767019 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 11:43:55.933489  767019 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 11:43:55.933564  767019 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 11:43:55.933679  767019 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 11:43:55.933772  767019 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 11:43:55.933816  767019 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 11:43:55.933994  767019 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1115 11:43:55.934094  767019 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.006975 seconds
	I1115 11:43:55.934214  767019 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 11:43:55.934354  767019 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 11:43:55.934420  767019 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 11:43:55.934633  767019 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-872969 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 11:43:55.934697  767019 kubeadm.go:319] [bootstrap-token] Using token: pcgbbl.x6hiqqoolbm07iuz
	I1115 11:43:55.937588  767019 out.go:252]   - Configuring RBAC rules ...
	I1115 11:43:55.937709  767019 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 11:43:55.937808  767019 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 11:43:55.937966  767019 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 11:43:55.938128  767019 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 11:43:55.938268  767019 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 11:43:55.938376  767019 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 11:43:55.938504  767019 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 11:43:55.938555  767019 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 11:43:55.938606  767019 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 11:43:55.938610  767019 kubeadm.go:319] 
	I1115 11:43:55.938683  767019 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 11:43:55.938688  767019 kubeadm.go:319] 
	I1115 11:43:55.938779  767019 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 11:43:55.938784  767019 kubeadm.go:319] 
	I1115 11:43:55.938812  767019 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 11:43:55.938877  767019 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 11:43:55.938933  767019 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 11:43:55.938937  767019 kubeadm.go:319] 
	I1115 11:43:55.938997  767019 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 11:43:55.939003  767019 kubeadm.go:319] 
	I1115 11:43:55.939055  767019 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 11:43:55.939060  767019 kubeadm.go:319] 
	I1115 11:43:55.939117  767019 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 11:43:55.939201  767019 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 11:43:55.939277  767019 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 11:43:55.939281  767019 kubeadm.go:319] 
	I1115 11:43:55.939375  767019 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 11:43:55.939459  767019 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 11:43:55.939464  767019 kubeadm.go:319] 
	I1115 11:43:55.939557  767019 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token pcgbbl.x6hiqqoolbm07iuz \
	I1115 11:43:55.939672  767019 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a \
	I1115 11:43:55.939695  767019 kubeadm.go:319] 	--control-plane 
	I1115 11:43:55.939699  767019 kubeadm.go:319] 
	I1115 11:43:55.939794  767019 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 11:43:55.939798  767019 kubeadm.go:319] 
	I1115 11:43:55.939889  767019 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token pcgbbl.x6hiqqoolbm07iuz \
	I1115 11:43:55.940015  767019 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a 
	I1115 11:43:55.940024  767019 cni.go:84] Creating CNI manager for ""
	I1115 11:43:55.940031  767019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:43:55.944464  767019 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 11:43:55.947485  767019 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 11:43:55.951683  767019 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1115 11:43:55.951706  767019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 11:43:55.979926  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 11:43:57.168249  767019 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1882875s)
	I1115 11:43:57.168289  767019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 11:43:57.168413  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:43:57.168485  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-872969 minikube.k8s.io/updated_at=2025_11_15T11_43_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=old-k8s-version-872969 minikube.k8s.io/primary=true
	I1115 11:43:57.313987  767019 ops.go:34] apiserver oom_adj: -16
	I1115 11:43:57.314094  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:43:57.814419  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:43:58.314182  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:43:58.814486  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:43:59.314252  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:43:59.814204  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:00.315221  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:00.814287  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:01.315051  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:01.815193  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:02.314239  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:02.815058  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:03.314401  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:03.814649  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:04.314224  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:04.814660  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:05.314185  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:05.814871  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:06.314221  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:06.815094  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:07.314761  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:07.814904  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:08.314996  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:08.815137  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:09.314283  767019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:44:09.421642  767019 kubeadm.go:1114] duration metric: took 12.253269129s to wait for elevateKubeSystemPrivileges
	I1115 11:44:09.421678  767019 kubeadm.go:403] duration metric: took 31.382179291s to StartCluster
	I1115 11:44:09.421704  767019 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:44:09.421771  767019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:44:09.422712  767019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:44:09.422941  767019 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:44:09.423039  767019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 11:44:09.423296  767019 config.go:182] Loaded profile config "old-k8s-version-872969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 11:44:09.423411  767019 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:44:09.423478  767019 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-872969"
	I1115 11:44:09.423494  767019 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-872969"
	I1115 11:44:09.423521  767019 host.go:66] Checking if "old-k8s-version-872969" exists ...
	I1115 11:44:09.424209  767019 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:09.424212  767019 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-872969"
	I1115 11:44:09.424244  767019 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-872969"
	I1115 11:44:09.424507  767019 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:09.428738  767019 out.go:179] * Verifying Kubernetes components...
	I1115 11:44:09.432496  767019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:44:09.475600  767019 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-872969"
	I1115 11:44:09.475643  767019 host.go:66] Checking if "old-k8s-version-872969" exists ...
	I1115 11:44:09.476089  767019 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:09.479533  767019 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:44:09.486401  767019 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:44:09.486425  767019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:44:09.486493  767019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:09.512462  767019 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:44:09.512482  767019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:44:09.512551  767019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:09.520382  767019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33789 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:09.556557  767019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33789 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:09.755930  767019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 11:44:09.756123  767019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:44:09.812532  767019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:44:09.878507  767019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:44:10.662910  767019 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1115 11:44:10.665285  767019 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-872969" to be "Ready" ...
	I1115 11:44:10.960958  767019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.082349565s)
	I1115 11:44:10.961233  767019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.14867764s)
	I1115 11:44:10.974060  767019 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 11:44:10.976940  767019 addons.go:515] duration metric: took 1.553515939s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 11:44:11.169769  767019 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-872969" context rescaled to 1 replicas
	W1115 11:44:12.669078  767019 node_ready.go:57] node "old-k8s-version-872969" has "Ready":"False" status (will retry)
	W1115 11:44:15.168969  767019 node_ready.go:57] node "old-k8s-version-872969" has "Ready":"False" status (will retry)
	W1115 11:44:17.668907  767019 node_ready.go:57] node "old-k8s-version-872969" has "Ready":"False" status (will retry)
	W1115 11:44:20.169032  767019 node_ready.go:57] node "old-k8s-version-872969" has "Ready":"False" status (will retry)
	W1115 11:44:22.668339  767019 node_ready.go:57] node "old-k8s-version-872969" has "Ready":"False" status (will retry)
	I1115 11:44:23.669145  767019 node_ready.go:49] node "old-k8s-version-872969" is "Ready"
	I1115 11:44:23.669176  767019 node_ready.go:38] duration metric: took 13.003862539s for node "old-k8s-version-872969" to be "Ready" ...
	I1115 11:44:23.669190  767019 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:44:23.669249  767019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:44:23.680923  767019 api_server.go:72] duration metric: took 14.257945569s to wait for apiserver process to appear ...
	I1115 11:44:23.680948  767019 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:44:23.680966  767019 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 11:44:23.690732  767019 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 11:44:23.694235  767019 api_server.go:141] control plane version: v1.28.0
	I1115 11:44:23.694271  767019 api_server.go:131] duration metric: took 13.310693ms to wait for apiserver health ...
	I1115 11:44:23.694281  767019 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:44:23.699132  767019 system_pods.go:59] 8 kube-system pods found
	I1115 11:44:23.699181  767019 system_pods.go:61] "coredns-5dd5756b68-rndhq" [5de00329-d0e0-48be-9d3d-39b760cb0ea8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:44:23.699194  767019 system_pods.go:61] "etcd-old-k8s-version-872969" [4db73848-4c5c-4849-8a90-a3d6570064b7] Running
	I1115 11:44:23.699200  767019 system_pods.go:61] "kindnet-zmkg5" [623da114-560f-4888-a498-ef271e3da582] Running
	I1115 11:44:23.699205  767019 system_pods.go:61] "kube-apiserver-old-k8s-version-872969" [8adbb139-5c3b-4f75-983e-bf9010e0c46e] Running
	I1115 11:44:23.699209  767019 system_pods.go:61] "kube-controller-manager-old-k8s-version-872969" [171522f5-2d0f-4cdd-aeca-e56a9ff15b6f] Running
	I1115 11:44:23.699214  767019 system_pods.go:61] "kube-proxy-tgrgq" [f8984361-3dcd-41a6-bc3b-cd185d25b7b6] Running
	I1115 11:44:23.699218  767019 system_pods.go:61] "kube-scheduler-old-k8s-version-872969" [c2d07ad6-d326-4ade-8eb5-5002d24cc986] Running
	I1115 11:44:23.699224  767019 system_pods.go:61] "storage-provisioner" [ba1eb52a-c93b-4fbf-981e-58bf5de71141] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:44:23.699234  767019 system_pods.go:74] duration metric: took 4.94671ms to wait for pod list to return data ...
	I1115 11:44:23.699243  767019 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:44:23.702300  767019 default_sa.go:45] found service account: "default"
	I1115 11:44:23.702331  767019 default_sa.go:55] duration metric: took 3.081974ms for default service account to be created ...
	I1115 11:44:23.702342  767019 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:44:23.706381  767019 system_pods.go:86] 8 kube-system pods found
	I1115 11:44:23.706462  767019 system_pods.go:89] "coredns-5dd5756b68-rndhq" [5de00329-d0e0-48be-9d3d-39b760cb0ea8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:44:23.706491  767019 system_pods.go:89] "etcd-old-k8s-version-872969" [4db73848-4c5c-4849-8a90-a3d6570064b7] Running
	I1115 11:44:23.706526  767019 system_pods.go:89] "kindnet-zmkg5" [623da114-560f-4888-a498-ef271e3da582] Running
	I1115 11:44:23.706548  767019 system_pods.go:89] "kube-apiserver-old-k8s-version-872969" [8adbb139-5c3b-4f75-983e-bf9010e0c46e] Running
	I1115 11:44:23.706566  767019 system_pods.go:89] "kube-controller-manager-old-k8s-version-872969" [171522f5-2d0f-4cdd-aeca-e56a9ff15b6f] Running
	I1115 11:44:23.706585  767019 system_pods.go:89] "kube-proxy-tgrgq" [f8984361-3dcd-41a6-bc3b-cd185d25b7b6] Running
	I1115 11:44:23.706610  767019 system_pods.go:89] "kube-scheduler-old-k8s-version-872969" [c2d07ad6-d326-4ade-8eb5-5002d24cc986] Running
	I1115 11:44:23.706646  767019 system_pods.go:89] "storage-provisioner" [ba1eb52a-c93b-4fbf-981e-58bf5de71141] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:44:23.706684  767019 retry.go:31] will retry after 206.820407ms: missing components: kube-dns
	I1115 11:44:23.928081  767019 system_pods.go:86] 8 kube-system pods found
	I1115 11:44:23.928170  767019 system_pods.go:89] "coredns-5dd5756b68-rndhq" [5de00329-d0e0-48be-9d3d-39b760cb0ea8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:44:23.928192  767019 system_pods.go:89] "etcd-old-k8s-version-872969" [4db73848-4c5c-4849-8a90-a3d6570064b7] Running
	I1115 11:44:23.928227  767019 system_pods.go:89] "kindnet-zmkg5" [623da114-560f-4888-a498-ef271e3da582] Running
	I1115 11:44:23.928248  767019 system_pods.go:89] "kube-apiserver-old-k8s-version-872969" [8adbb139-5c3b-4f75-983e-bf9010e0c46e] Running
	I1115 11:44:23.928266  767019 system_pods.go:89] "kube-controller-manager-old-k8s-version-872969" [171522f5-2d0f-4cdd-aeca-e56a9ff15b6f] Running
	I1115 11:44:23.928284  767019 system_pods.go:89] "kube-proxy-tgrgq" [f8984361-3dcd-41a6-bc3b-cd185d25b7b6] Running
	I1115 11:44:23.928302  767019 system_pods.go:89] "kube-scheduler-old-k8s-version-872969" [c2d07ad6-d326-4ade-8eb5-5002d24cc986] Running
	I1115 11:44:23.928333  767019 system_pods.go:89] "storage-provisioner" [ba1eb52a-c93b-4fbf-981e-58bf5de71141] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:44:23.928370  767019 retry.go:31] will retry after 388.958042ms: missing components: kube-dns
	I1115 11:44:24.321306  767019 system_pods.go:86] 8 kube-system pods found
	I1115 11:44:24.321337  767019 system_pods.go:89] "coredns-5dd5756b68-rndhq" [5de00329-d0e0-48be-9d3d-39b760cb0ea8] Running
	I1115 11:44:24.321344  767019 system_pods.go:89] "etcd-old-k8s-version-872969" [4db73848-4c5c-4849-8a90-a3d6570064b7] Running
	I1115 11:44:24.321348  767019 system_pods.go:89] "kindnet-zmkg5" [623da114-560f-4888-a498-ef271e3da582] Running
	I1115 11:44:24.321353  767019 system_pods.go:89] "kube-apiserver-old-k8s-version-872969" [8adbb139-5c3b-4f75-983e-bf9010e0c46e] Running
	I1115 11:44:24.321358  767019 system_pods.go:89] "kube-controller-manager-old-k8s-version-872969" [171522f5-2d0f-4cdd-aeca-e56a9ff15b6f] Running
	I1115 11:44:24.321365  767019 system_pods.go:89] "kube-proxy-tgrgq" [f8984361-3dcd-41a6-bc3b-cd185d25b7b6] Running
	I1115 11:44:24.321372  767019 system_pods.go:89] "kube-scheduler-old-k8s-version-872969" [c2d07ad6-d326-4ade-8eb5-5002d24cc986] Running
	I1115 11:44:24.321376  767019 system_pods.go:89] "storage-provisioner" [ba1eb52a-c93b-4fbf-981e-58bf5de71141] Running
	I1115 11:44:24.321384  767019 system_pods.go:126] duration metric: took 619.03501ms to wait for k8s-apps to be running ...
	I1115 11:44:24.321395  767019 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:44:24.321455  767019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:44:24.334836  767019 system_svc.go:56] duration metric: took 13.426059ms WaitForService to wait for kubelet
	I1115 11:44:24.334871  767019 kubeadm.go:587] duration metric: took 14.911899552s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:44:24.334891  767019 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:44:24.337933  767019 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:44:24.337968  767019 node_conditions.go:123] node cpu capacity is 2
	I1115 11:44:24.337982  767019 node_conditions.go:105] duration metric: took 3.085059ms to run NodePressure ...
	I1115 11:44:24.337996  767019 start.go:242] waiting for startup goroutines ...
	I1115 11:44:24.338003  767019 start.go:247] waiting for cluster config update ...
	I1115 11:44:24.338019  767019 start.go:256] writing updated cluster config ...
	I1115 11:44:24.338322  767019 ssh_runner.go:195] Run: rm -f paused
	I1115 11:44:24.342158  767019 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:44:24.346585  767019 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-rndhq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:44:24.351716  767019 pod_ready.go:94] pod "coredns-5dd5756b68-rndhq" is "Ready"
	I1115 11:44:24.351744  767019 pod_ready.go:86] duration metric: took 5.128037ms for pod "coredns-5dd5756b68-rndhq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:44:24.355243  767019 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:44:24.360603  767019 pod_ready.go:94] pod "etcd-old-k8s-version-872969" is "Ready"
	I1115 11:44:24.360630  767019 pod_ready.go:86] duration metric: took 5.346871ms for pod "etcd-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:44:24.363715  767019 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:44:24.368896  767019 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-872969" is "Ready"
	I1115 11:44:24.368918  767019 pod_ready.go:86] duration metric: took 5.17928ms for pod "kube-apiserver-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:44:24.371939  767019 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:44:24.746382  767019 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-872969" is "Ready"
	I1115 11:44:24.746408  767019 pod_ready.go:86] duration metric: took 374.444133ms for pod "kube-controller-manager-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:44:24.946927  767019 pod_ready.go:83] waiting for pod "kube-proxy-tgrgq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:44:25.346524  767019 pod_ready.go:94] pod "kube-proxy-tgrgq" is "Ready"
	I1115 11:44:25.346550  767019 pod_ready.go:86] duration metric: took 399.592474ms for pod "kube-proxy-tgrgq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:44:25.547972  767019 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:44:25.946420  767019 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-872969" is "Ready"
	I1115 11:44:25.946451  767019 pod_ready.go:86] duration metric: took 398.450215ms for pod "kube-scheduler-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:44:25.946465  767019 pod_ready.go:40] duration metric: took 1.604273631s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:44:26.018568  767019 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1115 11:44:26.021951  767019 out.go:203] 
	W1115 11:44:26.024979  767019 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1115 11:44:26.028056  767019 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1115 11:44:26.031985  767019 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-872969" cluster and "default" namespace by default
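	The run above ends with minikube probing the apiserver healthz endpoint, waiting for the node Ready condition, and waiting for the kube-system pods. A minimal sketch of repeating the same checks by hand against this profile (address and profile name taken from the log above; assumes the minikube binary is on PATH; not part of the captured output):
	
	  # apiserver health probe as done at 11:44:23 (self-signed certs, hence -k)
	  curl -k https://192.168.85.2:8443/healthz
	  # node and kube-system pod readiness, via the bundled version-matched kubectl
	  minikube -p old-k8s-version-872969 kubectl -- get nodes
	  minikube -p old-k8s-version-872969 kubectl -- get pods -n kube-system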
	
	
	==> CRI-O <==
	Nov 15 11:44:23 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:23.908027562Z" level=info msg="Starting container: 69b272501fd5da0baa3ad014f24bb754632395f9d812fa054d3edde7d44b001b" id=172fb6dd-acb0-4df5-8ab0-d5a30abd023c name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:44:23 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:23.910467157Z" level=info msg="Started container" PID=1939 containerID=11222af03b57a00261f4cd65f9a374d1d0784448befd57b2f396f8e06b8ca9fe description=kube-system/storage-provisioner/storage-provisioner id=6a15db49-8826-46a1-b75f-c9b544113eb1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=118dd4677e4c545827f8ff42686ac24688210bbd87f3b3117d7c4ec80cb0232c
	Nov 15 11:44:23 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:23.917269644Z" level=info msg="Started container" PID=1944 containerID=69b272501fd5da0baa3ad014f24bb754632395f9d812fa054d3edde7d44b001b description=kube-system/coredns-5dd5756b68-rndhq/coredns id=172fb6dd-acb0-4df5-8ab0-d5a30abd023c name=/runtime.v1.RuntimeService/StartContainer sandboxID=814c542b781497ceec3697b0f8408f251a96a1865cbf62763b0ea201a760cbc9
	Nov 15 11:44:26 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:26.569835859Z" level=info msg="Running pod sandbox: default/busybox/POD" id=32748bff-57a8-4e61-9c84-cdd0ca321045 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:44:26 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:26.569912094Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:44:26 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:26.575068169Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e10976339d6d2392080cc8bc31194c1433aab8653f7354ca65d2e588f7aee4db UID:e38478c9-e689-4a8a-a576-f61f8d997349 NetNS:/var/run/netns/df5b48f4-8889-445c-9439-bebe5b5104f5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40014b43a0}] Aliases:map[]}"
	Nov 15 11:44:26 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:26.575260647Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 11:44:26 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:26.586846134Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e10976339d6d2392080cc8bc31194c1433aab8653f7354ca65d2e588f7aee4db UID:e38478c9-e689-4a8a-a576-f61f8d997349 NetNS:/var/run/netns/df5b48f4-8889-445c-9439-bebe5b5104f5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40014b43a0}] Aliases:map[]}"
	Nov 15 11:44:26 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:26.587005693Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 11:44:26 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:26.59124869Z" level=info msg="Ran pod sandbox e10976339d6d2392080cc8bc31194c1433aab8653f7354ca65d2e588f7aee4db with infra container: default/busybox/POD" id=32748bff-57a8-4e61-9c84-cdd0ca321045 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:44:26 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:26.593615652Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d15be58b-2051-48f7-9564-ce00d7da7db0 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:44:26 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:26.593741217Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d15be58b-2051-48f7-9564-ce00d7da7db0 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:44:26 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:26.593783465Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d15be58b-2051-48f7-9564-ce00d7da7db0 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:44:26 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:26.594674816Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4ada2b86-71dc-46e1-a54d-228cd9bc2daf name=/runtime.v1.ImageService/PullImage
	Nov 15 11:44:26 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:26.597058221Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 11:44:28 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:28.785042453Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=4ada2b86-71dc-46e1-a54d-228cd9bc2daf name=/runtime.v1.ImageService/PullImage
	Nov 15 11:44:28 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:28.788172665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bd3739da-5317-4921-88e4-53a7c7dd3908 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:44:28 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:28.795704441Z" level=info msg="Creating container: default/busybox/busybox" id=9bf5db45-0490-4dcc-93bf-3331542d0cdd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:44:28 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:28.796014771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:44:28 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:28.801223679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:44:28 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:28.801798021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:44:28 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:28.816523812Z" level=info msg="Created container 2632e9c36d362a813e54ab1deaf1a823cdc23f0568baad5e901cffa37468e1e5: default/busybox/busybox" id=9bf5db45-0490-4dcc-93bf-3331542d0cdd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:44:28 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:28.817896473Z" level=info msg="Starting container: 2632e9c36d362a813e54ab1deaf1a823cdc23f0568baad5e901cffa37468e1e5" id=24b5a714-45fd-4a85-b1c5-5b32d7fe95b9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:44:28 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:28.81964257Z" level=info msg="Started container" PID=2001 containerID=2632e9c36d362a813e54ab1deaf1a823cdc23f0568baad5e901cffa37468e1e5 description=default/busybox/busybox id=24b5a714-45fd-4a85-b1c5-5b32d7fe95b9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e10976339d6d2392080cc8bc31194c1433aab8653f7354ca65d2e588f7aee4db
	Nov 15 11:44:34 old-k8s-version-872969 crio[836]: time="2025-11-15T11:44:34.472936037Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	2632e9c36d362       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   e10976339d6d2       busybox                                          default
	69b272501fd5d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   814c542b78149       coredns-5dd5756b68-rndhq                         kube-system
	11222af03b57a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   118dd4677e4c5       storage-provisioner                              kube-system
	1bad6b6d5a9b7       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    22 seconds ago      Running             kindnet-cni               0                   81c255fa202b9       kindnet-zmkg5                                    kube-system
	058833aebd0ec       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      25 seconds ago      Running             kube-proxy                0                   347eaf956ec7f       kube-proxy-tgrgq                                 kube-system
	36d45b13defe4       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      47 seconds ago      Running             kube-controller-manager   0                   1a67325f79d9f       kube-controller-manager-old-k8s-version-872969   kube-system
	19ebfcd3ea6d8       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      47 seconds ago      Running             kube-scheduler            0                   cc5691026d158       kube-scheduler-old-k8s-version-872969            kube-system
	e11c86cba90ad       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      47 seconds ago      Running             etcd                      0                   64bda4c0ff27c       etcd-old-k8s-version-872969                      kube-system
	906c9b273117f       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      47 seconds ago      Running             kube-apiserver            0                   4324806b252b3       kube-apiserver-old-k8s-version-872969            kube-system
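	The table above is the CRI-O view of the node's containers. A minimal sketch of pulling the same listing directly from the node (assumes the minikube binary is on PATH; not part of the captured output):
	
	  # list every CRI-O managed container on the profile's node
	  minikube -p old-k8s-version-872969 ssh -- sudo crictl ps -a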
	
	
	==> coredns [69b272501fd5da0baa3ad014f24bb754632395f9d812fa054d3edde7d44b001b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49485 - 27805 "HINFO IN 7585335716560207922.3740831681474150038. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030948102s
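	The configuration SHA logged above reflects the Corefile rewritten during startup (the sed pipeline at 11:44:09), which adds a log directive before errors and injects a hosts block ahead of the forward plugin. Reconstructed from that command, the injected fragment is:
	
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }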
	
	
	==> describe nodes <==
	Name:               old-k8s-version-872969
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-872969
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=old-k8s-version-872969
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_43_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:43:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-872969
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:44:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:44:26 +0000   Sat, 15 Nov 2025 11:43:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:44:26 +0000   Sat, 15 Nov 2025 11:43:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:44:26 +0000   Sat, 15 Nov 2025 11:43:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:44:26 +0000   Sat, 15 Nov 2025 11:44:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-872969
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                3b68266a-d7a6-4882-86de-e8553ea8772d
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-rndhq                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-872969                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-zmkg5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-872969             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-872969    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-tgrgq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-872969             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-872969 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-872969 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-872969 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node old-k8s-version-872969 event: Registered Node old-k8s-version-872969 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-872969 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov15 11:16] overlayfs: idmapped layers are currently not supported
	[Nov15 11:18] overlayfs: idmapped layers are currently not supported
	[Nov15 11:22] overlayfs: idmapped layers are currently not supported
	[Nov15 11:23] overlayfs: idmapped layers are currently not supported
	[Nov15 11:24] overlayfs: idmapped layers are currently not supported
	[Nov15 11:25] overlayfs: idmapped layers are currently not supported
	[Nov15 11:26] overlayfs: idmapped layers are currently not supported
	[Nov15 11:28] overlayfs: idmapped layers are currently not supported
	[ +23.116625] overlayfs: idmapped layers are currently not supported
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e11c86cba90ad9089aeb4c45211a974554a3f6ce626b038d1d2c3047af945de8] <==
	{"level":"info","ts":"2025-11-15T11:43:48.53177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-15T11:43:48.531923Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-15T11:43:48.533007Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-15T11:43:48.533146Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T11:43:48.533302Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T11:43:48.534052Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-15T11:43:48.534127Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-15T11:43:49.312748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-15T11:43:49.312871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-15T11:43:49.312939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-15T11:43:49.312979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-15T11:43:49.313014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-15T11:43:49.313053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-15T11:43:49.313092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-15T11:43:49.324613Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-872969 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-15T11:43:49.326642Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T11:43:49.328156Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-15T11:43:49.33106Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T11:43:49.336641Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-15T11:43:49.338102Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T11:43:49.338362Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-15T11:43:49.338479Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-15T11:43:49.345166Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T11:43:49.345308Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T11:43:49.345357Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 11:44:36 up  3:27,  0 user,  load average: 3.20, 3.48, 2.77
	Linux old-k8s-version-872969 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1bad6b6d5a9b7388d1bdff5008f46ac78ff83bb063b1ea745d7d377b0e04dd9c] <==
	I1115 11:44:13.092442       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:44:13.093079       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 11:44:13.093243       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:44:13.093261       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:44:13.093274       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:44:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:44:13.300748       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:44:13.388956       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:44:13.389057       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:44:13.399424       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 11:44:13.495049       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:44:13.495080       1 metrics.go:72] Registering metrics
	I1115 11:44:13.495135       1 controller.go:711] "Syncing nftables rules"
	I1115 11:44:23.306433       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:44:23.306490       1 main.go:301] handling current node
	I1115 11:44:33.300287       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:44:33.300319       1 main.go:301] handling current node
	
	
	==> kube-apiserver [906c9b273117f499d513b6311b1613e1ae22249169f8e62d668d87ec71de55bd] <==
	I1115 11:43:52.895533       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1115 11:43:52.895618       1 shared_informer.go:318] Caches are synced for configmaps
	I1115 11:43:52.899638       1 controller.go:624] quota admission added evaluator for: namespaces
	I1115 11:43:52.901995       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1115 11:43:52.902756       1 aggregator.go:166] initial CRD sync complete...
	I1115 11:43:52.904917       1 autoregister_controller.go:141] Starting autoregister controller
	I1115 11:43:52.904961       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 11:43:52.905009       1 cache.go:39] Caches are synced for autoregister controller
	E1115 11:43:52.912181       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1115 11:43:53.116394       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:43:53.491107       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 11:43:53.496653       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 11:43:53.496676       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:43:54.109889       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:43:54.159321       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:43:54.227215       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 11:43:54.234678       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1115 11:43:54.235694       1 controller.go:624] quota admission added evaluator for: endpoints
	I1115 11:43:54.243710       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 11:43:54.988728       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1115 11:43:55.820329       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1115 11:43:55.841061       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 11:43:55.861390       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1115 11:44:09.688361       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1115 11:44:09.748798       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [36d45b13defe4fd2e49662502534eb4ee09d3fc1d83b6715dfe2de32537b3b56] <==
	I1115 11:44:09.088185       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-872969" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1115 11:44:09.088269       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-872969" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1115 11:44:09.088340       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-872969" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1115 11:44:09.431795       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 11:44:09.468381       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 11:44:09.468420       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1115 11:44:09.740425       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tgrgq"
	I1115 11:44:09.750935       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zmkg5"
	I1115 11:44:09.772765       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1115 11:44:09.919826       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fsdgm"
	I1115 11:44:09.947797       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rndhq"
	I1115 11:44:09.970511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="194.586651ms"
	I1115 11:44:10.050712       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.150292ms"
	I1115 11:44:10.050886       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="135.985µs"
	I1115 11:44:10.722824       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1115 11:44:10.778335       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-fsdgm"
	I1115 11:44:10.803655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.540385ms"
	I1115 11:44:10.829396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="25.687289ms"
	I1115 11:44:10.829491       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.53µs"
	I1115 11:44:10.830308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.845µs"
	I1115 11:44:23.538718       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.902µs"
	I1115 11:44:23.553646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.731µs"
	I1115 11:44:24.063878       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1115 11:44:24.208488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.184202ms"
	I1115 11:44:24.209715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.809µs"
	
	
	==> kube-proxy [058833aebd0ec2e510d0a64c100b204fe1b4a852e2ddf62798b6a9e14ed939ae] <==
	I1115 11:44:10.370270       1 server_others.go:69] "Using iptables proxy"
	I1115 11:44:10.400709       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1115 11:44:10.462010       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:44:10.464222       1 server_others.go:152] "Using iptables Proxier"
	I1115 11:44:10.464256       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1115 11:44:10.464264       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1115 11:44:10.464295       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1115 11:44:10.464499       1 server.go:846] "Version info" version="v1.28.0"
	I1115 11:44:10.464521       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:44:10.466929       1 config.go:188] "Starting service config controller"
	I1115 11:44:10.466960       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1115 11:44:10.466980       1 config.go:97] "Starting endpoint slice config controller"
	I1115 11:44:10.466984       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1115 11:44:10.467446       1 config.go:315] "Starting node config controller"
	I1115 11:44:10.467453       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1115 11:44:10.568386       1 shared_informer.go:318] Caches are synced for node config
	I1115 11:44:10.568418       1 shared_informer.go:318] Caches are synced for service config
	I1115 11:44:10.568454       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [19ebfcd3ea6d84c1ecd48fea46cca061fd95a3903d6212ba329ec685e1e824e4] <==
	W1115 11:43:52.850223       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1115 11:43:52.850274       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1115 11:43:52.850397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1115 11:43:52.850448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1115 11:43:52.860938       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1115 11:43:52.860977       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1115 11:43:52.878782       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1115 11:43:52.879106       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1115 11:43:52.879485       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1115 11:43:52.879523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1115 11:43:52.879807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1115 11:43:52.879828       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1115 11:43:52.879968       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1115 11:43:52.879981       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1115 11:43:52.880192       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1115 11:43:52.880249       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1115 11:43:52.880964       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1115 11:43:52.884488       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1115 11:43:53.644106       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1115 11:43:53.644226       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 11:43:53.747860       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1115 11:43:53.747997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1115 11:43:53.828006       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1115 11:43:53.828114       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1115 11:43:56.512955       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 15 11:44:09 old-k8s-version-872969 kubelet[1373]: I1115 11:44:09.767739    1373 topology_manager.go:215] "Topology Admit Handler" podUID="f8984361-3dcd-41a6-bc3b-cd185d25b7b6" podNamespace="kube-system" podName="kube-proxy-tgrgq"
	Nov 15 11:44:09 old-k8s-version-872969 kubelet[1373]: I1115 11:44:09.774248    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8984361-3dcd-41a6-bc3b-cd185d25b7b6-xtables-lock\") pod \"kube-proxy-tgrgq\" (UID: \"f8984361-3dcd-41a6-bc3b-cd185d25b7b6\") " pod="kube-system/kube-proxy-tgrgq"
	Nov 15 11:44:09 old-k8s-version-872969 kubelet[1373]: I1115 11:44:09.774307    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64s8c\" (UniqueName: \"kubernetes.io/projected/f8984361-3dcd-41a6-bc3b-cd185d25b7b6-kube-api-access-64s8c\") pod \"kube-proxy-tgrgq\" (UID: \"f8984361-3dcd-41a6-bc3b-cd185d25b7b6\") " pod="kube-system/kube-proxy-tgrgq"
	Nov 15 11:44:09 old-k8s-version-872969 kubelet[1373]: I1115 11:44:09.774335    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8984361-3dcd-41a6-bc3b-cd185d25b7b6-kube-proxy\") pod \"kube-proxy-tgrgq\" (UID: \"f8984361-3dcd-41a6-bc3b-cd185d25b7b6\") " pod="kube-system/kube-proxy-tgrgq"
	Nov 15 11:44:09 old-k8s-version-872969 kubelet[1373]: I1115 11:44:09.774360    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8984361-3dcd-41a6-bc3b-cd185d25b7b6-lib-modules\") pod \"kube-proxy-tgrgq\" (UID: \"f8984361-3dcd-41a6-bc3b-cd185d25b7b6\") " pod="kube-system/kube-proxy-tgrgq"
	Nov 15 11:44:09 old-k8s-version-872969 kubelet[1373]: I1115 11:44:09.781924    1373 topology_manager.go:215] "Topology Admit Handler" podUID="623da114-560f-4888-a498-ef271e3da582" podNamespace="kube-system" podName="kindnet-zmkg5"
	Nov 15 11:44:09 old-k8s-version-872969 kubelet[1373]: I1115 11:44:09.876622    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdj9z\" (UniqueName: \"kubernetes.io/projected/623da114-560f-4888-a498-ef271e3da582-kube-api-access-vdj9z\") pod \"kindnet-zmkg5\" (UID: \"623da114-560f-4888-a498-ef271e3da582\") " pod="kube-system/kindnet-zmkg5"
	Nov 15 11:44:09 old-k8s-version-872969 kubelet[1373]: I1115 11:44:09.876699    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/623da114-560f-4888-a498-ef271e3da582-xtables-lock\") pod \"kindnet-zmkg5\" (UID: \"623da114-560f-4888-a498-ef271e3da582\") " pod="kube-system/kindnet-zmkg5"
	Nov 15 11:44:09 old-k8s-version-872969 kubelet[1373]: I1115 11:44:09.876726    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/623da114-560f-4888-a498-ef271e3da582-cni-cfg\") pod \"kindnet-zmkg5\" (UID: \"623da114-560f-4888-a498-ef271e3da582\") " pod="kube-system/kindnet-zmkg5"
	Nov 15 11:44:09 old-k8s-version-872969 kubelet[1373]: I1115 11:44:09.876825    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/623da114-560f-4888-a498-ef271e3da582-lib-modules\") pod \"kindnet-zmkg5\" (UID: \"623da114-560f-4888-a498-ef271e3da582\") " pod="kube-system/kindnet-zmkg5"
	Nov 15 11:44:10 old-k8s-version-872969 kubelet[1373]: W1115 11:44:10.126524    1373 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/crio-81c255fa202b9a69c97a089ca7d65a7f032cb7a2d06f444b57ef5ca73e0d4b6d WatchSource:0}: Error finding container 81c255fa202b9a69c97a089ca7d65a7f032cb7a2d06f444b57ef5ca73e0d4b6d: Status 404 returned error can't find the container with id 81c255fa202b9a69c97a089ca7d65a7f032cb7a2d06f444b57ef5ca73e0d4b6d
	Nov 15 11:44:11 old-k8s-version-872969 kubelet[1373]: I1115 11:44:11.139813    1373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tgrgq" podStartSLOduration=2.139768827 podCreationTimestamp="2025-11-15 11:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:44:11.139668403 +0000 UTC m=+15.362024487" watchObservedRunningTime="2025-11-15 11:44:11.139768827 +0000 UTC m=+15.362124919"
	Nov 15 11:44:16 old-k8s-version-872969 kubelet[1373]: I1115 11:44:16.035035    1373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-zmkg5" podStartSLOduration=4.216715186 podCreationTimestamp="2025-11-15 11:44:09 +0000 UTC" firstStartedPulling="2025-11-15 11:44:10.131200206 +0000 UTC m=+14.353556290" lastFinishedPulling="2025-11-15 11:44:12.949475489 +0000 UTC m=+17.171831572" observedRunningTime="2025-11-15 11:44:13.155666461 +0000 UTC m=+17.378022553" watchObservedRunningTime="2025-11-15 11:44:16.034990468 +0000 UTC m=+20.257346552"
	Nov 15 11:44:23 old-k8s-version-872969 kubelet[1373]: I1115 11:44:23.504432    1373 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 15 11:44:23 old-k8s-version-872969 kubelet[1373]: I1115 11:44:23.537051    1373 topology_manager.go:215] "Topology Admit Handler" podUID="5de00329-d0e0-48be-9d3d-39b760cb0ea8" podNamespace="kube-system" podName="coredns-5dd5756b68-rndhq"
	Nov 15 11:44:23 old-k8s-version-872969 kubelet[1373]: I1115 11:44:23.541554    1373 topology_manager.go:215] "Topology Admit Handler" podUID="ba1eb52a-c93b-4fbf-981e-58bf5de71141" podNamespace="kube-system" podName="storage-provisioner"
	Nov 15 11:44:23 old-k8s-version-872969 kubelet[1373]: I1115 11:44:23.693113    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhmj2\" (UniqueName: \"kubernetes.io/projected/5de00329-d0e0-48be-9d3d-39b760cb0ea8-kube-api-access-nhmj2\") pod \"coredns-5dd5756b68-rndhq\" (UID: \"5de00329-d0e0-48be-9d3d-39b760cb0ea8\") " pod="kube-system/coredns-5dd5756b68-rndhq"
	Nov 15 11:44:23 old-k8s-version-872969 kubelet[1373]: I1115 11:44:23.693229    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5de00329-d0e0-48be-9d3d-39b760cb0ea8-config-volume\") pod \"coredns-5dd5756b68-rndhq\" (UID: \"5de00329-d0e0-48be-9d3d-39b760cb0ea8\") " pod="kube-system/coredns-5dd5756b68-rndhq"
	Nov 15 11:44:23 old-k8s-version-872969 kubelet[1373]: I1115 11:44:23.693262    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ba1eb52a-c93b-4fbf-981e-58bf5de71141-tmp\") pod \"storage-provisioner\" (UID: \"ba1eb52a-c93b-4fbf-981e-58bf5de71141\") " pod="kube-system/storage-provisioner"
	Nov 15 11:44:23 old-k8s-version-872969 kubelet[1373]: I1115 11:44:23.693331    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68bzn\" (UniqueName: \"kubernetes.io/projected/ba1eb52a-c93b-4fbf-981e-58bf5de71141-kube-api-access-68bzn\") pod \"storage-provisioner\" (UID: \"ba1eb52a-c93b-4fbf-981e-58bf5de71141\") " pod="kube-system/storage-provisioner"
	Nov 15 11:44:23 old-k8s-version-872969 kubelet[1373]: W1115 11:44:23.864245    1373 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/crio-814c542b781497ceec3697b0f8408f251a96a1865cbf62763b0ea201a760cbc9 WatchSource:0}: Error finding container 814c542b781497ceec3697b0f8408f251a96a1865cbf62763b0ea201a760cbc9: Status 404 returned error can't find the container with id 814c542b781497ceec3697b0f8408f251a96a1865cbf62763b0ea201a760cbc9
	Nov 15 11:44:24 old-k8s-version-872969 kubelet[1373]: I1115 11:44:24.193554    1373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.193512401 podCreationTimestamp="2025-11-15 11:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:44:24.180077538 +0000 UTC m=+28.402433621" watchObservedRunningTime="2025-11-15 11:44:24.193512401 +0000 UTC m=+28.415868484"
	Nov 15 11:44:26 old-k8s-version-872969 kubelet[1373]: I1115 11:44:26.266143    1373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rndhq" podStartSLOduration=17.266074126 podCreationTimestamp="2025-11-15 11:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:44:24.19472675 +0000 UTC m=+28.417082834" watchObservedRunningTime="2025-11-15 11:44:26.266074126 +0000 UTC m=+30.488430209"
	Nov 15 11:44:26 old-k8s-version-872969 kubelet[1373]: I1115 11:44:26.267732    1373 topology_manager.go:215] "Topology Admit Handler" podUID="e38478c9-e689-4a8a-a576-f61f8d997349" podNamespace="default" podName="busybox"
	Nov 15 11:44:26 old-k8s-version-872969 kubelet[1373]: I1115 11:44:26.311288    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mh5\" (UniqueName: \"kubernetes.io/projected/e38478c9-e689-4a8a-a576-f61f8d997349-kube-api-access-w5mh5\") pod \"busybox\" (UID: \"e38478c9-e689-4a8a-a576-f61f8d997349\") " pod="default/busybox"
	
	
	==> storage-provisioner [11222af03b57a00261f4cd65f9a374d1d0784448befd57b2f396f8e06b8ca9fe] <==
	I1115 11:44:23.941178       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 11:44:23.955707       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 11:44:23.955918       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1115 11:44:23.965802       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 11:44:23.966482       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-872969_185781b2-35f9-4a82-ac99-d978993643c4!
	I1115 11:44:23.966322       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81006117-e52a-4d02-8262-09cc8cbb9b80", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-872969_185781b2-35f9-4a82-ac99-d978993643c4 became leader
	I1115 11:44:24.067040       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-872969_185781b2-35f9-4a82-ac99-d978993643c4!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-872969 -n old-k8s-version-872969
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-872969 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-872969 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-872969 --alsologtostderr -v=1: exit status 80 (2.108267522s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-872969 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:45:50.170815  772854 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:45:50.170932  772854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:45:50.170939  772854 out.go:374] Setting ErrFile to fd 2...
	I1115 11:45:50.170944  772854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:45:50.171187  772854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:45:50.171432  772854 out.go:368] Setting JSON to false
	I1115 11:45:50.171458  772854 mustload.go:66] Loading cluster: old-k8s-version-872969
	I1115 11:45:50.171844  772854 config.go:182] Loaded profile config "old-k8s-version-872969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 11:45:50.172299  772854 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:45:50.192045  772854 host.go:66] Checking if "old-k8s-version-872969" exists ...
	I1115 11:45:50.192372  772854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:45:50.261252  772854 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 11:45:50.251501398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:45:50.262956  772854 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-872969 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 11:45:50.266417  772854 out.go:179] * Pausing node old-k8s-version-872969 ... 
	I1115 11:45:50.270150  772854 host.go:66] Checking if "old-k8s-version-872969" exists ...
	I1115 11:45:50.270500  772854 ssh_runner.go:195] Run: systemctl --version
	I1115 11:45:50.270550  772854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:45:50.288335  772854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:45:50.391729  772854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:45:50.406580  772854 pause.go:52] kubelet running: true
	I1115 11:45:50.406644  772854 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:45:50.656325  772854 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:45:50.656423  772854 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:45:50.730781  772854 cri.go:89] found id: "02d2b5f4938e4f727c9ac593d2f34708d74a396c7d94efc50fd6294d92974c8a"
	I1115 11:45:50.730805  772854 cri.go:89] found id: "804eef2c91baf9184c8c7c9e054bbd57aa28d6dd39d97e7b71f4fb811d18ba99"
	I1115 11:45:50.730810  772854 cri.go:89] found id: "ac16149e5827319ea9d87af9b598f570997a123f2b974665d5b967913bb2c2fe"
	I1115 11:45:50.730813  772854 cri.go:89] found id: "c918d4e74a7df8005e44d4f479b40a185931fc4b765b4b81137bbcdb61810076"
	I1115 11:45:50.730817  772854 cri.go:89] found id: "32c5d8ff7931f51b39e4d677f3fa8990d64a8ce4f501f08b17cffa0a306cd10b"
	I1115 11:45:50.730820  772854 cri.go:89] found id: "7aa6ea3c1cb8bb9a148b615de44117cceecc46195098acef3b88d91075fe34dc"
	I1115 11:45:50.730823  772854 cri.go:89] found id: "14b3db3b917aa89b721a2b8851a6103f8c835a7cdcbc14441bc08d8ffa25f1c3"
	I1115 11:45:50.730827  772854 cri.go:89] found id: "b6f31762d19d0b4a10be610346a7111f70357651c16509121df8b4ff7215a71f"
	I1115 11:45:50.730834  772854 cri.go:89] found id: "a4ec05da29c9d2dea7b01be81b7223bb05c63336ca59bfa38a4235ac8c2ea05f"
	I1115 11:45:50.730841  772854 cri.go:89] found id: "68bb5d486b2287908b0fe52ab128a80f45c337c8da3f967e6a5c56fd065cf9f2"
	I1115 11:45:50.730844  772854 cri.go:89] found id: "db82c7b1afbcf518b92148f64445ec7c70f683303623edfa8dd13a0497384658"
	I1115 11:45:50.730847  772854 cri.go:89] found id: ""
	I1115 11:45:50.730896  772854 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:45:50.750285  772854 retry.go:31] will retry after 369.945242ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:45:50Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:45:51.120938  772854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:45:51.135684  772854 pause.go:52] kubelet running: false
	I1115 11:45:51.135753  772854 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:45:51.320141  772854 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:45:51.320237  772854 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:45:51.398449  772854 cri.go:89] found id: "02d2b5f4938e4f727c9ac593d2f34708d74a396c7d94efc50fd6294d92974c8a"
	I1115 11:45:51.398476  772854 cri.go:89] found id: "804eef2c91baf9184c8c7c9e054bbd57aa28d6dd39d97e7b71f4fb811d18ba99"
	I1115 11:45:51.398482  772854 cri.go:89] found id: "ac16149e5827319ea9d87af9b598f570997a123f2b974665d5b967913bb2c2fe"
	I1115 11:45:51.398486  772854 cri.go:89] found id: "c918d4e74a7df8005e44d4f479b40a185931fc4b765b4b81137bbcdb61810076"
	I1115 11:45:51.398490  772854 cri.go:89] found id: "32c5d8ff7931f51b39e4d677f3fa8990d64a8ce4f501f08b17cffa0a306cd10b"
	I1115 11:45:51.398494  772854 cri.go:89] found id: "7aa6ea3c1cb8bb9a148b615de44117cceecc46195098acef3b88d91075fe34dc"
	I1115 11:45:51.398497  772854 cri.go:89] found id: "14b3db3b917aa89b721a2b8851a6103f8c835a7cdcbc14441bc08d8ffa25f1c3"
	I1115 11:45:51.398500  772854 cri.go:89] found id: "b6f31762d19d0b4a10be610346a7111f70357651c16509121df8b4ff7215a71f"
	I1115 11:45:51.398504  772854 cri.go:89] found id: "a4ec05da29c9d2dea7b01be81b7223bb05c63336ca59bfa38a4235ac8c2ea05f"
	I1115 11:45:51.398511  772854 cri.go:89] found id: "68bb5d486b2287908b0fe52ab128a80f45c337c8da3f967e6a5c56fd065cf9f2"
	I1115 11:45:51.398515  772854 cri.go:89] found id: "db82c7b1afbcf518b92148f64445ec7c70f683303623edfa8dd13a0497384658"
	I1115 11:45:51.398519  772854 cri.go:89] found id: ""
	I1115 11:45:51.398569  772854 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:45:51.410028  772854 retry.go:31] will retry after 489.800085ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:45:51Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:45:51.900623  772854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:45:51.913734  772854 pause.go:52] kubelet running: false
	I1115 11:45:51.913797  772854 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:45:52.111353  772854 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:45:52.111432  772854 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:45:52.191813  772854 cri.go:89] found id: "02d2b5f4938e4f727c9ac593d2f34708d74a396c7d94efc50fd6294d92974c8a"
	I1115 11:45:52.191840  772854 cri.go:89] found id: "804eef2c91baf9184c8c7c9e054bbd57aa28d6dd39d97e7b71f4fb811d18ba99"
	I1115 11:45:52.191846  772854 cri.go:89] found id: "ac16149e5827319ea9d87af9b598f570997a123f2b974665d5b967913bb2c2fe"
	I1115 11:45:52.191851  772854 cri.go:89] found id: "c918d4e74a7df8005e44d4f479b40a185931fc4b765b4b81137bbcdb61810076"
	I1115 11:45:52.191854  772854 cri.go:89] found id: "32c5d8ff7931f51b39e4d677f3fa8990d64a8ce4f501f08b17cffa0a306cd10b"
	I1115 11:45:52.191859  772854 cri.go:89] found id: "7aa6ea3c1cb8bb9a148b615de44117cceecc46195098acef3b88d91075fe34dc"
	I1115 11:45:52.191862  772854 cri.go:89] found id: "14b3db3b917aa89b721a2b8851a6103f8c835a7cdcbc14441bc08d8ffa25f1c3"
	I1115 11:45:52.191886  772854 cri.go:89] found id: "b6f31762d19d0b4a10be610346a7111f70357651c16509121df8b4ff7215a71f"
	I1115 11:45:52.191897  772854 cri.go:89] found id: "a4ec05da29c9d2dea7b01be81b7223bb05c63336ca59bfa38a4235ac8c2ea05f"
	I1115 11:45:52.191905  772854 cri.go:89] found id: "68bb5d486b2287908b0fe52ab128a80f45c337c8da3f967e6a5c56fd065cf9f2"
	I1115 11:45:52.191909  772854 cri.go:89] found id: "db82c7b1afbcf518b92148f64445ec7c70f683303623edfa8dd13a0497384658"
	I1115 11:45:52.191918  772854 cri.go:89] found id: ""
	I1115 11:45:52.192006  772854 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:45:52.207710  772854 out.go:203] 
	W1115 11:45:52.210679  772854 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:45:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:45:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 11:45:52.210709  772854 out.go:285] * 
	* 
	W1115 11:45:52.217353  772854 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 11:45:52.220314  772854 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-872969 --alsologtostderr -v=1 failed: exit status 80
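The exit status 80 above traces to "sudo runc list -f json" failing with "open /run/runc: no such file or directory" while minikube enumerates running containers before pausing. A minimal diagnostic sketch, assuming the node image ships the crio binary with its config subcommand and that the usual "minikube ssh" plumbing works for this profile (these commands are illustrative and are not part of the test harness output):

	# which low-level OCI runtime is cri-o configured to use on this node?
	out/minikube-linux-arm64 ssh -p old-k8s-version-872969 -- sudo crio config 2>/dev/null | grep -A3 default_runtime
	# runc keeps container state under /run/runc, crun typically under /run/crun;
	# whichever directory exists hints at the runtime actually in use
	out/minikube-linux-arm64 ssh -p old-k8s-version-872969 -- ls -d /run/runc /run/crun

If cri-o is launching containers through crun rather than runc, the absence of /run/runc is expected and the retried "runc list" calls above would keep failing for the same reason.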
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-872969
helpers_test.go:243: (dbg) docker inspect old-k8s-version-872969:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80",
	        "Created": "2025-11-15T11:43:29.514556564Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 770756,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:44:49.619390047Z",
	            "FinishedAt": "2025-11-15T11:44:48.764971449Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/hostname",
	        "HostsPath": "/var/lib/docker/containers/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/hosts",
	        "LogPath": "/var/lib/docker/containers/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80-json.log",
	        "Name": "/old-k8s-version-872969",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-872969:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-872969",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80",
	                "LowerDir": "/var/lib/docker/overlay2/d28583d9fd967090aec67f47dcc0a8108b77dda2eb9d81dce80920e8f83075ef-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d28583d9fd967090aec67f47dcc0a8108b77dda2eb9d81dce80920e8f83075ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d28583d9fd967090aec67f47dcc0a8108b77dda2eb9d81dce80920e8f83075ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d28583d9fd967090aec67f47dcc0a8108b77dda2eb9d81dce80920e8f83075ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-872969",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-872969/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-872969",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-872969",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-872969",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0414405e20a115ccf2cbb5a5ec547187ddeb06230af5e0bce656cafb5dcaa07d",
	            "SandboxKey": "/var/run/docker/netns/0414405e20a1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33794"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33798"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33796"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33797"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-872969": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:d5:e5:ac:0c:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fe74aaea9f1ff898d8b3c6c329ef26fb68a67a4e5377e568964777357f485456",
	                    "EndpointID": "d18e01abc72b7896dee36c7182429028ddf2de9f688cddf521283e737abb492e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-872969",
	                        "661ed5bad40f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
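For reference, the state and port fields that the post-mortem relies on do not require reading the full JSON dump; a minimal sketch using docker's Go-template output (assuming the same container name as above, and the same 22/tcp format string that appears later in the minikube logs):

	docker inspect -f '{{.State.Status}} {{.State.Running}}' old-k8s-version-872969
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-872969

The first command should print "running true" for the state shown above; the second prints the host port (33794) that the SSH-based provisioning steps below connect to.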
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-872969 -n old-k8s-version-872969
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-872969 -n old-k8s-version-872969: exit status 2 (349.21481ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-872969 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-872969 logs -n 25: (1.417180079s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-949287 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo containerd config dump                                                                                                                                                                                                  │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo crio config                                                                                                                                                                                                             │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ delete  │ -p cilium-949287                                                                                                                                                                                                                              │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │ 15 Nov 25 11:41 UTC │
	│ start   │ -p force-systemd-env-386707 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-386707  │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │ 15 Nov 25 11:42 UTC │
	│ delete  │ -p kubernetes-upgrade-436490                                                                                                                                                                                                                  │ kubernetes-upgrade-436490 │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:42 UTC │
	│ start   │ -p cert-expiration-636406 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-636406    │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:43 UTC │
	│ delete  │ -p force-systemd-env-386707                                                                                                                                                                                                                   │ force-systemd-env-386707  │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:42 UTC │
	│ start   │ -p cert-options-303284 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-303284       │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:43 UTC │
	│ ssh     │ cert-options-303284 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-303284       │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ ssh     │ -p cert-options-303284 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-303284       │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ delete  │ -p cert-options-303284                                                                                                                                                                                                                        │ cert-options-303284       │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:44 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-872969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │                     │
	│ stop    │ -p old-k8s-version-872969 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-872969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:44 UTC │
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:45 UTC │
	│ image   │ old-k8s-version-872969 image list --format=json                                                                                                                                                                                               │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ pause   │ -p old-k8s-version-872969 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:44:49
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:44:49.333092  770629 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:44:49.333260  770629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:44:49.333291  770629 out.go:374] Setting ErrFile to fd 2...
	I1115 11:44:49.333310  770629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:44:49.333602  770629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:44:49.334018  770629 out.go:368] Setting JSON to false
	I1115 11:44:49.334948  770629 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12440,"bootTime":1763194649,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:44:49.335048  770629 start.go:143] virtualization:  
	I1115 11:44:49.338709  770629 out.go:179] * [old-k8s-version-872969] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:44:49.342864  770629 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:44:49.342978  770629 notify.go:221] Checking for updates...
	I1115 11:44:49.348939  770629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:44:49.351816  770629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:44:49.354696  770629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:44:49.357547  770629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:44:49.360413  770629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:44:49.363816  770629 config.go:182] Loaded profile config "old-k8s-version-872969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 11:44:49.367311  770629 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1115 11:44:49.370126  770629 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:44:49.400684  770629 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:44:49.400839  770629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:44:49.465656  770629 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:44:49.455604735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:44:49.465776  770629 docker.go:319] overlay module found
	I1115 11:44:49.468917  770629 out.go:179] * Using the docker driver based on existing profile
	I1115 11:44:49.471718  770629 start.go:309] selected driver: docker
	I1115 11:44:49.471734  770629 start.go:930] validating driver "docker" against &{Name:old-k8s-version-872969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:44:49.471847  770629 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:44:49.472589  770629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:44:49.531888  770629 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:44:49.522191791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:44:49.532234  770629 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:44:49.532268  770629 cni.go:84] Creating CNI manager for ""
	I1115 11:44:49.532326  770629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:44:49.532366  770629 start.go:353] cluster config:
	{Name:old-k8s-version-872969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:44:49.537383  770629 out.go:179] * Starting "old-k8s-version-872969" primary control-plane node in "old-k8s-version-872969" cluster
	I1115 11:44:49.540209  770629 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:44:49.543129  770629 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:44:49.545941  770629 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 11:44:49.545990  770629 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1115 11:44:49.546015  770629 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:44:49.546020  770629 cache.go:65] Caching tarball of preloaded images
	I1115 11:44:49.546108  770629 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:44:49.546117  770629 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1115 11:44:49.546226  770629 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/config.json ...
	I1115 11:44:49.565710  770629 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:44:49.565731  770629 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:44:49.565744  770629 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:44:49.565769  770629 start.go:360] acquireMachinesLock for old-k8s-version-872969: {Name:mk8e7def530b80cef5a2809f08776681cf0304db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:44:49.565829  770629 start.go:364] duration metric: took 36.325µs to acquireMachinesLock for "old-k8s-version-872969"
	I1115 11:44:49.565852  770629 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:44:49.565858  770629 fix.go:54] fixHost starting: 
	I1115 11:44:49.566123  770629 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:49.584666  770629 fix.go:112] recreateIfNeeded on old-k8s-version-872969: state=Stopped err=<nil>
	W1115 11:44:49.584695  770629 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:44:49.587907  770629 out.go:252] * Restarting existing docker container for "old-k8s-version-872969" ...
	I1115 11:44:49.587999  770629 cli_runner.go:164] Run: docker start old-k8s-version-872969
	I1115 11:44:49.868388  770629 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:49.888723  770629 kic.go:430] container "old-k8s-version-872969" state is running.
	I1115 11:44:49.889174  770629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-872969
	I1115 11:44:49.911307  770629 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/config.json ...
	I1115 11:44:49.911527  770629 machine.go:94] provisionDockerMachine start ...
	I1115 11:44:49.911594  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:49.933303  770629 main.go:143] libmachine: Using SSH client type: native
	I1115 11:44:49.933627  770629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33794 <nil> <nil>}
	I1115 11:44:49.933636  770629 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:44:49.934430  770629 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:44:53.096551  770629 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-872969
	
	I1115 11:44:53.096575  770629 ubuntu.go:182] provisioning hostname "old-k8s-version-872969"
	I1115 11:44:53.096641  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:53.114684  770629 main.go:143] libmachine: Using SSH client type: native
	I1115 11:44:53.115008  770629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33794 <nil> <nil>}
	I1115 11:44:53.115025  770629 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-872969 && echo "old-k8s-version-872969" | sudo tee /etc/hostname
	I1115 11:44:53.282908  770629 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-872969
	
	I1115 11:44:53.282987  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:53.305094  770629 main.go:143] libmachine: Using SSH client type: native
	I1115 11:44:53.305403  770629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33794 <nil> <nil>}
	I1115 11:44:53.305425  770629 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-872969' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-872969/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-872969' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:44:53.456951  770629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:44:53.456975  770629 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:44:53.456996  770629 ubuntu.go:190] setting up certificates
	I1115 11:44:53.457006  770629 provision.go:84] configureAuth start
	I1115 11:44:53.457080  770629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-872969
	I1115 11:44:53.473443  770629 provision.go:143] copyHostCerts
	I1115 11:44:53.473510  770629 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:44:53.473541  770629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:44:53.473622  770629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:44:53.473728  770629 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:44:53.473738  770629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:44:53.473765  770629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:44:53.473828  770629 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:44:53.473836  770629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:44:53.473860  770629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:44:53.473913  770629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-872969 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-872969]
	I1115 11:44:54.042162  770629 provision.go:177] copyRemoteCerts
	I1115 11:44:54.042232  770629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:44:54.042287  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:54.059987  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:54.165561  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 11:44:54.184515  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:44:54.202251  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1115 11:44:54.219770  770629 provision.go:87] duration metric: took 762.746321ms to configureAuth
	I1115 11:44:54.219795  770629 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:44:54.220027  770629 config.go:182] Loaded profile config "old-k8s-version-872969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 11:44:54.220135  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:54.241166  770629 main.go:143] libmachine: Using SSH client type: native
	I1115 11:44:54.241480  770629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33794 <nil> <nil>}
	I1115 11:44:54.241499  770629 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:44:54.565837  770629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:44:54.565875  770629 machine.go:97] duration metric: took 4.654329647s to provisionDockerMachine
	I1115 11:44:54.565888  770629 start.go:293] postStartSetup for "old-k8s-version-872969" (driver="docker")
	I1115 11:44:54.565898  770629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:44:54.565960  770629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:44:54.566008  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:54.586235  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:54.699004  770629 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:44:54.703163  770629 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:44:54.703194  770629 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:44:54.703206  770629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:44:54.703259  770629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:44:54.703351  770629 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:44:54.703458  770629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:44:54.712651  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:44:54.731751  770629 start.go:296] duration metric: took 165.847117ms for postStartSetup
	I1115 11:44:54.731829  770629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:44:54.731874  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:54.756093  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:54.857832  770629 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:44:54.862340  770629 fix.go:56] duration metric: took 5.296474151s for fixHost
	I1115 11:44:54.862362  770629 start.go:83] releasing machines lock for "old-k8s-version-872969", held for 5.296521692s
	I1115 11:44:54.862427  770629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-872969
	I1115 11:44:54.879098  770629 ssh_runner.go:195] Run: cat /version.json
	I1115 11:44:54.879165  770629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:44:54.879225  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:54.879183  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:54.905037  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:54.907862  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:55.107442  770629 ssh_runner.go:195] Run: systemctl --version
	I1115 11:44:55.114562  770629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:44:55.151283  770629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:44:55.155948  770629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:44:55.156036  770629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:44:55.163961  770629 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:44:55.164028  770629 start.go:496] detecting cgroup driver to use...
	I1115 11:44:55.164076  770629 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:44:55.164157  770629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:44:55.179127  770629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:44:55.192731  770629 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:44:55.192836  770629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:44:55.209195  770629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:44:55.223930  770629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:44:55.347381  770629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:44:55.467933  770629 docker.go:234] disabling docker service ...
	I1115 11:44:55.468065  770629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:44:55.485113  770629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:44:55.500411  770629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:44:55.627149  770629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:44:55.753742  770629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:44:55.766823  770629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:44:55.780490  770629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1115 11:44:55.780575  770629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:44:55.789454  770629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:44:55.789593  770629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:44:55.798976  770629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:44:55.807563  770629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:44:55.816271  770629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:44:55.824293  770629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:44:55.833014  770629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:44:55.841712  770629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:44:55.850263  770629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:44:55.857538  770629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:44:55.864850  770629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:44:55.983096  770629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:44:56.125340  770629 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:44:56.125413  770629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:44:56.129485  770629 start.go:564] Will wait 60s for crictl version
	I1115 11:44:56.129549  770629 ssh_runner.go:195] Run: which crictl
	I1115 11:44:56.133545  770629 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:44:56.162386  770629 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:44:56.162494  770629 ssh_runner.go:195] Run: crio --version
	I1115 11:44:56.190460  770629 ssh_runner.go:195] Run: crio --version
	I1115 11:44:56.223914  770629 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1115 11:44:56.226681  770629 cli_runner.go:164] Run: docker network inspect old-k8s-version-872969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:44:56.242406  770629 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 11:44:56.246458  770629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:44:56.255895  770629 kubeadm.go:884] updating cluster {Name:old-k8s-version-872969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:44:56.256012  770629 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 11:44:56.256067  770629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:44:56.289772  770629 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:44:56.289792  770629 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:44:56.289855  770629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:44:56.318244  770629 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:44:56.318268  770629 cache_images.go:86] Images are preloaded, skipping loading
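
The preload check above runs "sudo crictl images --output json" and only skips extraction when the expected images are already present. A rough Go sketch of that kind of check follows; the JSON field names (images, repoTags) and the sample image tag are assumptions about crictl's output, not something confirmed by this log.

	// Sketch: list images via crictl and confirm a required tag is present.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Example requirement for a v1.28.0 / cri-o preload; adjust to the real image set.
		for _, want := range []string{"registry.k8s.io/kube-apiserver:v1.28.0"} {
			fmt.Println(want, "preloaded:", have[want])
		}
	}
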
	I1115 11:44:56.318275  770629 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1115 11:44:56.318377  770629 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-872969 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:44:56.318458  770629 ssh_runner.go:195] Run: crio config
	I1115 11:44:56.396507  770629 cni.go:84] Creating CNI manager for ""
	I1115 11:44:56.396571  770629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:44:56.396624  770629 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:44:56.396675  770629 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-872969 NodeName:old-k8s-version-872969 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:44:56.396911  770629 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-872969"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
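
Two fields in the generated KubeletConfiguration have to line up with the CRI-O setup configured earlier: cgroupDriver (cgroupfs here) and containerRuntimeEndpoint (the crio socket). A small Go sketch that reads the multi-document YAML and prints those fields, assuming gopkg.in/yaml.v3 is available and using the /var/tmp/minikube/kubeadm.yaml.new path from the log:

	// Sketch: scan the multi-document kubeadm YAML for the KubeletConfiguration
	// document and print the two runtime-related fields.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			if doc["kind"] == "KubeletConfiguration" {
				fmt.Println("cgroupDriver:", doc["cgroupDriver"])
				fmt.Println("containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"])
			}
		}
	}
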
	
	I1115 11:44:56.397017  770629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1115 11:44:56.404672  770629 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:44:56.404738  770629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:44:56.411978  770629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1115 11:44:56.424420  770629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:44:56.439553  770629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1115 11:44:56.452009  770629 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:44:56.455855  770629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
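
The /etc/hosts edit above is an idempotent upsert: drop any stale line for the name, then append the fresh IP-to-name mapping. A Go sketch of the same idea, printing the result instead of writing the root-owned file:

	// Sketch: rebuild /etc/hosts with a single entry for the given name.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost removes any line ending in "<tab><name>" and appends "ip<tab>name".
	func upsertHost(hosts, ip, name string) string {
		var out []string
		for _, line := range strings.Split(hosts, "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry
			}
			out = append(out, line)
		}
		return strings.TrimRight(strings.Join(out, "\n"), "\n") + "\n" + ip + "\t" + name + "\n"
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		fmt.Print(upsertHost(string(data), "192.168.85.2", "control-plane.minikube.internal"))
	}
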
	I1115 11:44:56.465901  770629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:44:56.586871  770629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:44:56.608556  770629 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969 for IP: 192.168.85.2
	I1115 11:44:56.608618  770629 certs.go:195] generating shared ca certs ...
	I1115 11:44:56.608647  770629 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:44:56.608826  770629 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:44:56.608964  770629 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:44:56.608993  770629 certs.go:257] generating profile certs ...
	I1115 11:44:56.609132  770629 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.key
	I1115 11:44:56.609217  770629 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.key.5f4bae20
	I1115 11:44:56.609294  770629 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/proxy-client.key
	I1115 11:44:56.609454  770629 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:44:56.609519  770629 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:44:56.609562  770629 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:44:56.609610  770629 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:44:56.609649  770629 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:44:56.609701  770629 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:44:56.609782  770629 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:44:56.610426  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:44:56.638083  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:44:56.659295  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:44:56.680809  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:44:56.708401  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 11:44:56.734153  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:44:56.762376  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:44:56.796717  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:44:56.819726  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:44:56.842201  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:44:56.861962  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:44:56.881525  770629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:44:56.895303  770629 ssh_runner.go:195] Run: openssl version
	I1115 11:44:56.902201  770629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:44:56.910299  770629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:44:56.913862  770629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:44:56.913948  770629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:44:56.956329  770629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:44:56.964512  770629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:44:56.972977  770629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:44:56.976680  770629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:44:56.976791  770629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:44:57.017926  770629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:44:57.026012  770629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:44:57.034279  770629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:44:57.037955  770629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:44:57.038052  770629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:44:57.086437  770629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:44:57.094622  770629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:44:57.098630  770629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:44:57.143364  770629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:44:57.186284  770629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:44:57.229823  770629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:44:57.281264  770629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:44:57.330469  770629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
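
The "openssl x509 -noout -in <cert> -checkend 86400" runs above ask whether each certificate is still valid 24 hours from now. The equivalent check with Go's standard library, using one of the certificate paths named in the log:

	// Sketch: report whether a PEM certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
		} else {
			fmt.Println("certificate valid beyond 24h:", cert.NotAfter)
		}
	}
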
	I1115 11:44:57.402163  770629 kubeadm.go:401] StartCluster: {Name:old-k8s-version-872969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:44:57.402300  770629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:44:57.402394  770629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:44:57.478070  770629 cri.go:89] found id: "7aa6ea3c1cb8bb9a148b615de44117cceecc46195098acef3b88d91075fe34dc"
	I1115 11:44:57.478136  770629 cri.go:89] found id: "14b3db3b917aa89b721a2b8851a6103f8c835a7cdcbc14441bc08d8ffa25f1c3"
	I1115 11:44:57.478154  770629 cri.go:89] found id: "b6f31762d19d0b4a10be610346a7111f70357651c16509121df8b4ff7215a71f"
	I1115 11:44:57.478181  770629 cri.go:89] found id: "a4ec05da29c9d2dea7b01be81b7223bb05c63336ca59bfa38a4235ac8c2ea05f"
	I1115 11:44:57.478217  770629 cri.go:89] found id: ""
	I1115 11:44:57.478336  770629 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 11:44:57.489617  770629 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:44:57Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:44:57.489769  770629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:44:57.497837  770629 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:44:57.497904  770629 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:44:57.497996  770629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:44:57.505475  770629 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:44:57.506447  770629 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-872969" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:44:57.507341  770629 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-584713/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-872969" cluster setting kubeconfig missing "old-k8s-version-872969" context setting]
	I1115 11:44:57.508056  770629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:44:57.510037  770629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:44:57.520813  770629 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 11:44:57.520849  770629 kubeadm.go:602] duration metric: took 22.926289ms to restartPrimaryControlPlane
	I1115 11:44:57.520894  770629 kubeadm.go:403] duration metric: took 118.739965ms to StartCluster
	I1115 11:44:57.520912  770629 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:44:57.520980  770629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:44:57.521898  770629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:44:57.522103  770629 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:44:57.522419  770629 config.go:182] Loaded profile config "old-k8s-version-872969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 11:44:57.522465  770629 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:44:57.522570  770629 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-872969"
	I1115 11:44:57.522591  770629 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-872969"
	W1115 11:44:57.522597  770629 addons.go:248] addon storage-provisioner should already be in state true
	I1115 11:44:57.522620  770629 host.go:66] Checking if "old-k8s-version-872969" exists ...
	I1115 11:44:57.523067  770629 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:57.523237  770629 addons.go:70] Setting dashboard=true in profile "old-k8s-version-872969"
	I1115 11:44:57.523259  770629 addons.go:239] Setting addon dashboard=true in "old-k8s-version-872969"
	W1115 11:44:57.523279  770629 addons.go:248] addon dashboard should already be in state true
	I1115 11:44:57.523299  770629 host.go:66] Checking if "old-k8s-version-872969" exists ...
	I1115 11:44:57.523681  770629 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:57.525566  770629 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-872969"
	I1115 11:44:57.525598  770629 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-872969"
	I1115 11:44:57.525909  770629 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:57.527991  770629 out.go:179] * Verifying Kubernetes components...
	I1115 11:44:57.531136  770629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:44:57.564179  770629 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:44:57.567268  770629 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:44:57.567291  770629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:44:57.567362  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:57.588934  770629 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 11:44:57.591961  770629 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 11:44:57.594883  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 11:44:57.594907  770629 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 11:44:57.594970  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:57.595600  770629 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-872969"
	W1115 11:44:57.595614  770629 addons.go:248] addon default-storageclass should already be in state true
	I1115 11:44:57.595640  770629 host.go:66] Checking if "old-k8s-version-872969" exists ...
	I1115 11:44:57.596046  770629 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:57.634385  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:57.649602  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:57.659707  770629 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:44:57.659733  770629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:44:57.659795  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:57.683552  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:57.888699  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 11:44:57.888723  770629 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 11:44:57.890522  770629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:44:57.922796  770629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:44:57.934162  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 11:44:57.934235  770629 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 11:44:57.958111  770629 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-872969" to be "Ready" ...
	I1115 11:44:57.995972  770629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:44:58.017411  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 11:44:58.017477  770629 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 11:44:58.091482  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 11:44:58.091556  770629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 11:44:58.170864  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 11:44:58.170934  770629 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 11:44:58.236171  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 11:44:58.236245  770629 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 11:44:58.265511  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 11:44:58.265593  770629 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 11:44:58.285579  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 11:44:58.285651  770629 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 11:44:58.311351  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:44:58.311417  770629 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 11:44:58.329562  770629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:45:01.981600  770629 node_ready.go:49] node "old-k8s-version-872969" is "Ready"
	I1115 11:45:01.981626  770629 node_ready.go:38] duration metric: took 4.023434379s for node "old-k8s-version-872969" to be "Ready" ...
	I1115 11:45:01.981639  770629 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:45:01.981716  770629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:45:03.783263  770629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.860396919s)
	I1115 11:45:03.783320  770629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.787285389s)
	I1115 11:45:04.695222  770629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.365604868s)
	I1115 11:45:04.695328  770629 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.713600925s)
	I1115 11:45:04.695423  770629 api_server.go:72] duration metric: took 7.173284079s to wait for apiserver process to appear ...
	I1115 11:45:04.695430  770629 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:45:04.695450  770629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 11:45:04.699430  770629 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-872969 addons enable metrics-server
	
	I1115 11:45:04.702410  770629 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1115 11:45:04.705842  770629 addons.go:515] duration metric: took 7.183368195s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1115 11:45:04.706762  770629 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 11:45:04.708198  770629 api_server.go:141] control plane version: v1.28.0
	I1115 11:45:04.708238  770629 api_server.go:131] duration metric: took 12.792841ms to wait for apiserver health ...
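
The healthz wait above polls https://192.168.85.2:8443/healthz until it returns 200 with body "ok". A minimal Go sketch of that loop; TLS verification is skipped here purely for brevity, whereas minikube itself trusts the cluster CA.

	// Sketch: poll the apiserver healthz endpoint until it reports "ok" or we time out.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == 200 && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}
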
	I1115 11:45:04.708249  770629 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:45:04.715274  770629 system_pods.go:59] 8 kube-system pods found
	I1115 11:45:04.715337  770629 system_pods.go:61] "coredns-5dd5756b68-rndhq" [5de00329-d0e0-48be-9d3d-39b760cb0ea8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:45:04.715351  770629 system_pods.go:61] "etcd-old-k8s-version-872969" [4db73848-4c5c-4849-8a90-a3d6570064b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:45:04.715364  770629 system_pods.go:61] "kindnet-zmkg5" [623da114-560f-4888-a498-ef271e3da582] Running
	I1115 11:45:04.715377  770629 system_pods.go:61] "kube-apiserver-old-k8s-version-872969" [8adbb139-5c3b-4f75-983e-bf9010e0c46e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:45:04.715392  770629 system_pods.go:61] "kube-controller-manager-old-k8s-version-872969" [171522f5-2d0f-4cdd-aeca-e56a9ff15b6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:45:04.715421  770629 system_pods.go:61] "kube-proxy-tgrgq" [f8984361-3dcd-41a6-bc3b-cd185d25b7b6] Running
	I1115 11:45:04.715434  770629 system_pods.go:61] "kube-scheduler-old-k8s-version-872969" [c2d07ad6-d326-4ade-8eb5-5002d24cc986] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:45:04.715439  770629 system_pods.go:61] "storage-provisioner" [ba1eb52a-c93b-4fbf-981e-58bf5de71141] Running
	I1115 11:45:04.715458  770629 system_pods.go:74] duration metric: took 7.19915ms to wait for pod list to return data ...
	I1115 11:45:04.715470  770629 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:45:04.720014  770629 default_sa.go:45] found service account: "default"
	I1115 11:45:04.720044  770629 default_sa.go:55] duration metric: took 4.567054ms for default service account to be created ...
	I1115 11:45:04.720054  770629 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:45:04.726587  770629 system_pods.go:86] 8 kube-system pods found
	I1115 11:45:04.726619  770629 system_pods.go:89] "coredns-5dd5756b68-rndhq" [5de00329-d0e0-48be-9d3d-39b760cb0ea8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:45:04.726649  770629 system_pods.go:89] "etcd-old-k8s-version-872969" [4db73848-4c5c-4849-8a90-a3d6570064b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:45:04.726660  770629 system_pods.go:89] "kindnet-zmkg5" [623da114-560f-4888-a498-ef271e3da582] Running
	I1115 11:45:04.726668  770629 system_pods.go:89] "kube-apiserver-old-k8s-version-872969" [8adbb139-5c3b-4f75-983e-bf9010e0c46e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:45:04.726678  770629 system_pods.go:89] "kube-controller-manager-old-k8s-version-872969" [171522f5-2d0f-4cdd-aeca-e56a9ff15b6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:45:04.726684  770629 system_pods.go:89] "kube-proxy-tgrgq" [f8984361-3dcd-41a6-bc3b-cd185d25b7b6] Running
	I1115 11:45:04.726691  770629 system_pods.go:89] "kube-scheduler-old-k8s-version-872969" [c2d07ad6-d326-4ade-8eb5-5002d24cc986] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:45:04.726697  770629 system_pods.go:89] "storage-provisioner" [ba1eb52a-c93b-4fbf-981e-58bf5de71141] Running
	I1115 11:45:04.726714  770629 system_pods.go:126] duration metric: took 6.644633ms to wait for k8s-apps to be running ...
	I1115 11:45:04.726727  770629 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:45:04.726794  770629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:45:04.741475  770629 system_svc.go:56] duration metric: took 14.737497ms WaitForService to wait for kubelet
	I1115 11:45:04.741549  770629 kubeadm.go:587] duration metric: took 7.219408083s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:45:04.741585  770629 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:45:04.752656  770629 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:45:04.752691  770629 node_conditions.go:123] node cpu capacity is 2
	I1115 11:45:04.752705  770629 node_conditions.go:105] duration metric: took 11.101688ms to run NodePressure ...
	I1115 11:45:04.752718  770629 start.go:242] waiting for startup goroutines ...
	I1115 11:45:04.752726  770629 start.go:247] waiting for cluster config update ...
	I1115 11:45:04.752738  770629 start.go:256] writing updated cluster config ...
	I1115 11:45:04.753057  770629 ssh_runner.go:195] Run: rm -f paused
	I1115 11:45:04.757291  770629 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:45:04.771466  770629 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-rndhq" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:45:06.777290  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:08.778080  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:11.277493  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:13.776891  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:15.779364  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:18.278952  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:20.778535  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:23.280666  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:25.777136  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:27.778114  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:30.277697  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:32.777671  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:35.278639  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	I1115 11:45:35.778058  770629 pod_ready.go:94] pod "coredns-5dd5756b68-rndhq" is "Ready"
	I1115 11:45:35.778087  770629 pod_ready.go:86] duration metric: took 31.006590507s for pod "coredns-5dd5756b68-rndhq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:35.781429  770629 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:35.787321  770629 pod_ready.go:94] pod "etcd-old-k8s-version-872969" is "Ready"
	I1115 11:45:35.787350  770629 pod_ready.go:86] duration metric: took 5.897235ms for pod "etcd-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:35.790509  770629 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:35.795731  770629 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-872969" is "Ready"
	I1115 11:45:35.795809  770629 pod_ready.go:86] duration metric: took 5.26919ms for pod "kube-apiserver-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:35.798877  770629 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:35.975518  770629 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-872969" is "Ready"
	I1115 11:45:35.975546  770629 pod_ready.go:86] duration metric: took 176.62724ms for pod "kube-controller-manager-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:36.176573  770629 pod_ready.go:83] waiting for pod "kube-proxy-tgrgq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:36.575781  770629 pod_ready.go:94] pod "kube-proxy-tgrgq" is "Ready"
	I1115 11:45:36.575857  770629 pod_ready.go:86] duration metric: took 399.254789ms for pod "kube-proxy-tgrgq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:36.776596  770629 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:37.175612  770629 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-872969" is "Ready"
	I1115 11:45:37.175649  770629 pod_ready.go:86] duration metric: took 399.027848ms for pod "kube-scheduler-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:37.175661  770629 pod_ready.go:40] duration metric: took 32.418322048s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
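
The pod_ready waits above poll the kube-system pods until each reports the Ready condition. A comparable loop written against client-go, assuming a kubeconfig path and using the k8s-app=kube-dns selector to watch just the CoreDNS pod:

	// Sketch: wait until the first kube-dns pod in kube-system reports Ready.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		for i := 0; i < 240; i++ {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
				fmt.Println("coredns is Ready")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for coredns to be Ready")
	}
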
	I1115 11:45:37.234476  770629 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1115 11:45:37.237857  770629 out.go:203] 
	W1115 11:45:37.240746  770629 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1115 11:45:37.243697  770629 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1115 11:45:37.246822  770629 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-872969" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 11:45:37 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:37.794086241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:45:37 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:37.803816529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:45:37 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:37.804438387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:45:37 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:37.822312927Z" level=info msg="Created container 68bb5d486b2287908b0fe52ab128a80f45c337c8da3f967e6a5c56fd065cf9f2: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs/dashboard-metrics-scraper" id=c8f4afc7-d691-483c-9029-3b95506bd8ca name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:45:37 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:37.823347039Z" level=info msg="Starting container: 68bb5d486b2287908b0fe52ab128a80f45c337c8da3f967e6a5c56fd065cf9f2" id=64cb7789-8630-4fcc-b0b4-0bc32a24eea0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:45:37 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:37.825464398Z" level=info msg="Started container" PID=1665 containerID=68bb5d486b2287908b0fe52ab128a80f45c337c8da3f967e6a5c56fd065cf9f2 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs/dashboard-metrics-scraper id=64cb7789-8630-4fcc-b0b4-0bc32a24eea0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3be693604f6b24026357f2fea53a4d46cc02f63e41ca8be7db54d6cf15f23408
	Nov 15 11:45:37 old-k8s-version-872969 conmon[1663]: conmon 68bb5d486b2287908b0f <ninfo>: container 1665 exited with status 1
	Nov 15 11:45:38 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:37.999790021Z" level=info msg="Removing container: 2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867" id=c9ed08c4-85b3-4b81-b40d-82f69b59bd05 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:45:38 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:38.012768298Z" level=info msg="Error loading conmon cgroup of container 2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867: cgroup deleted" id=c9ed08c4-85b3-4b81-b40d-82f69b59bd05 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:45:38 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:38.017238324Z" level=info msg="Removed container 2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs/dashboard-metrics-scraper" id=c9ed08c4-85b3-4b81-b40d-82f69b59bd05 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.70783203Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.71578491Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.715819979Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.715845432Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.719067517Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.71910074Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.719124674Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.7225523Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.722586007Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.722609827Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.725691824Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.725833528Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.725869746Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.729121706Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.729153387Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	68bb5d486b228       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   3be693604f6b2       dashboard-metrics-scraper-5f989dc9cf-s57fs       kubernetes-dashboard
	02d2b5f4938e4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   6bdfeb259c748       storage-provisioner                              kube-system
	db82c7b1afbcf       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   29 seconds ago      Running             kubernetes-dashboard        0                   080c8b6ffe429       kubernetes-dashboard-8694d4445c-9xc5k            kubernetes-dashboard
	804eef2c91baf       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           49 seconds ago      Running             coredns                     1                   3c0eddbced8b4       coredns-5dd5756b68-rndhq                         kube-system
	ac16149e58273       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   47c604966ac05       kindnet-zmkg5                                    kube-system
	c918d4e74a7df       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   6bdfeb259c748       storage-provisioner                              kube-system
	d454980894d97       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   a5124b1da49b6       busybox                                          default
	32c5d8ff7931f       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           50 seconds ago      Running             kube-proxy                  1                   6c4b51b39c069       kube-proxy-tgrgq                                 kube-system
	7aa6ea3c1cb8b       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           55 seconds ago      Running             kube-scheduler              1                   2be490bfc7af2       kube-scheduler-old-k8s-version-872969            kube-system
	14b3db3b917aa       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           55 seconds ago      Running             kube-apiserver              1                   2d41cc61eec1b       kube-apiserver-old-k8s-version-872969            kube-system
	b6f31762d19d0       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           55 seconds ago      Running             kube-controller-manager     1                   f911225773966       kube-controller-manager-old-k8s-version-872969   kube-system
	a4ec05da29c9d       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           55 seconds ago      Running             etcd                        1                   93ba964508ada       etcd-old-k8s-version-872969                      kube-system
	
	
	==> coredns [804eef2c91baf9184c8c7c9e054bbd57aa28d6dd39d97e7b71f4fb811d18ba99] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34146 - 28918 "HINFO IN 4976347791005943752.2199682188033655270. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023198201s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-872969
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-872969
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=old-k8s-version-872969
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_43_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:43:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-872969
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:45:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:45:32 +0000   Sat, 15 Nov 2025 11:43:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:45:32 +0000   Sat, 15 Nov 2025 11:43:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:45:32 +0000   Sat, 15 Nov 2025 11:43:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:45:32 +0000   Sat, 15 Nov 2025 11:44:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-872969
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                3b68266a-d7a6-4882-86de-e8553ea8772d
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-rndhq                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-old-k8s-version-872969                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-zmkg5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-872969             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-872969    200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-tgrgq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-872969             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-s57fs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-9xc5k             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node old-k8s-version-872969 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node old-k8s-version-872969 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node old-k8s-version-872969 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           104s               node-controller  Node old-k8s-version-872969 event: Registered Node old-k8s-version-872969 in Controller
	  Normal  NodeReady                90s                kubelet          Node old-k8s-version-872969 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node old-k8s-version-872969 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node old-k8s-version-872969 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node old-k8s-version-872969 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node old-k8s-version-872969 event: Registered Node old-k8s-version-872969 in Controller
	
	
	==> dmesg <==
	[Nov15 11:18] overlayfs: idmapped layers are currently not supported
	[Nov15 11:22] overlayfs: idmapped layers are currently not supported
	[Nov15 11:23] overlayfs: idmapped layers are currently not supported
	[Nov15 11:24] overlayfs: idmapped layers are currently not supported
	[Nov15 11:25] overlayfs: idmapped layers are currently not supported
	[Nov15 11:26] overlayfs: idmapped layers are currently not supported
	[Nov15 11:28] overlayfs: idmapped layers are currently not supported
	[ +23.116625] overlayfs: idmapped layers are currently not supported
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	[Nov15 11:44] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a4ec05da29c9d2dea7b01be81b7223bb05c63336ca59bfa38a4235ac8c2ea05f] <==
	{"level":"info","ts":"2025-11-15T11:44:57.785624Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-15T11:44:57.785663Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-15T11:44:57.786011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-15T11:44:57.786471Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-15T11:44:57.786634Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T11:44:57.786695Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T11:44:57.788244Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-15T11:44:57.788522Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-15T11:44:57.788589Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-15T11:44:57.78873Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T11:44:57.789021Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T11:44:59.054212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-15T11:44:59.05432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-15T11:44:59.054375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-15T11:44:59.054414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-15T11:44:59.054447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-15T11:44:59.054483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-15T11:44:59.054512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-15T11:44:59.057058Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-872969 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-15T11:44:59.057252Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T11:44:59.058337Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-15T11:44:59.058746Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T11:44:59.060219Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-15T11:44:59.060329Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-15T11:44:59.064335Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:45:53 up  3:28,  0 user,  load average: 1.96, 3.06, 2.68
	Linux old-k8s-version-872969 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ac16149e5827319ea9d87af9b598f570997a123f2b974665d5b967913bb2c2fe] <==
	I1115 11:45:03.551475       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:45:03.551704       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 11:45:03.551835       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:45:03.551846       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:45:03.551856       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:45:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:45:03.702662       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:45:03.702934       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:45:03.702980       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:45:03.703764       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 11:45:33.703597       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 11:45:33.703885       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 11:45:33.703970       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 11:45:33.704049       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1115 11:45:35.303996       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:45:35.304029       1 metrics.go:72] Registering metrics
	I1115 11:45:35.304087       1 controller.go:711] "Syncing nftables rules"
	I1115 11:45:43.706729       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:45:43.706846       1 main.go:301] handling current node
	
	
	==> kube-apiserver [14b3db3b917aa89b721a2b8851a6103f8c835a7cdcbc14441bc08d8ffa25f1c3] <==
	I1115 11:45:02.039541       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 11:45:02.060587       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:45:02.064548       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1115 11:45:02.064588       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1115 11:45:02.070389       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1115 11:45:02.064609       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1115 11:45:02.071416       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	I1115 11:45:02.088966       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1115 11:45:02.140649       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 11:45:02.797558       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:45:04.342441       1 controller.go:624] quota admission added evaluator for: namespaces
	I1115 11:45:04.451324       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1115 11:45:04.511874       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:45:04.545327       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:45:04.562665       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1115 11:45:04.654458       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.164.137"}
	I1115 11:45:04.683241       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.20.140"}
	E1115 11:45:12.072502       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	I1115 11:45:14.456324       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1115 11:45:14.551907       1 controller.go:624] quota admission added evaluator for: endpoints
	I1115 11:45:14.570979       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1115 11:45:22.073945       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E1115 11:45:32.074918       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E1115 11:45:42.075934       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E1115 11:45:52.076451       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [b6f31762d19d0b4a10be610346a7111f70357651c16509121df8b4ff7215a71f] <==
	I1115 11:45:14.538225       1 event.go:307] "Event occurred" object="old-k8s-version-872969" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-872969 event: Registered Node old-k8s-version-872969 in Controller"
	I1115 11:45:14.538529       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="70.697241ms"
	I1115 11:45:14.538870       1 shared_informer.go:318] Caches are synced for TTL
	I1115 11:45:14.549063       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.017577ms"
	I1115 11:45:14.571388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="22.200592ms"
	I1115 11:45:14.571549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.073µs"
	I1115 11:45:14.591285       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1115 11:45:14.599087       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 11:45:14.599646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.039797ms"
	I1115 11:45:14.606981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.139µs"
	I1115 11:45:14.614291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.536698ms"
	I1115 11:45:14.614506       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="91.242µs"
	I1115 11:45:14.636467       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 11:45:14.992793       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 11:45:15.005589       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 11:45:15.005634       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1115 11:45:19.972944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="125.482µs"
	I1115 11:45:20.978579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.84µs"
	I1115 11:45:21.972508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.782µs"
	I1115 11:45:24.989929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.396799ms"
	I1115 11:45:24.990122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.568µs"
	I1115 11:45:35.650992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.678669ms"
	I1115 11:45:35.651093       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.569µs"
	I1115 11:45:38.026135       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.096µs"
	I1115 11:45:44.861777       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.81µs"
	
	
	==> kube-proxy [32c5d8ff7931f51b39e4d677f3fa8990d64a8ce4f501f08b17cffa0a306cd10b] <==
	I1115 11:45:03.704990       1 server_others.go:69] "Using iptables proxy"
	I1115 11:45:03.751307       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1115 11:45:04.342270       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:45:04.345308       1 server_others.go:152] "Using iptables Proxier"
	I1115 11:45:04.345402       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1115 11:45:04.345435       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1115 11:45:04.345495       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1115 11:45:04.345723       1 server.go:846] "Version info" version="v1.28.0"
	I1115 11:45:04.345935       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:45:04.346624       1 config.go:188] "Starting service config controller"
	I1115 11:45:04.346692       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1115 11:45:04.346741       1 config.go:97] "Starting endpoint slice config controller"
	I1115 11:45:04.346769       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1115 11:45:04.347258       1 config.go:315] "Starting node config controller"
	I1115 11:45:04.347311       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1115 11:45:04.446995       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1115 11:45:04.462788       1 shared_informer.go:318] Caches are synced for service config
	I1115 11:45:04.462858       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7aa6ea3c1cb8bb9a148b615de44117cceecc46195098acef3b88d91075fe34dc] <==
	I1115 11:45:02.363200       1 serving.go:348] Generated self-signed cert in-memory
	I1115 11:45:04.717762       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1115 11:45:04.718704       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:45:04.727021       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1115 11:45:04.727397       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1115 11:45:04.727443       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1115 11:45:04.727483       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1115 11:45:04.734669       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:45:04.734760       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1115 11:45:04.734806       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:45:04.734835       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1115 11:45:04.827517       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1115 11:45:04.835037       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1115 11:45:04.835040       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 15 11:45:14 old-k8s-version-872969 kubelet[774]: I1115 11:45:14.672418     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4d1ca727-bfad-4baa-95c1-8bdb23a987a4-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-9xc5k\" (UID: \"4d1ca727-bfad-4baa-95c1-8bdb23a987a4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9xc5k"
	Nov 15 11:45:14 old-k8s-version-872969 kubelet[774]: I1115 11:45:14.672481     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aab34a0a-0d02-4365-8701-3261373ad53a-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-s57fs\" (UID: \"aab34a0a-0d02-4365-8701-3261373ad53a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs"
	Nov 15 11:45:14 old-k8s-version-872969 kubelet[774]: I1115 11:45:14.672513     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf6zj\" (UniqueName: \"kubernetes.io/projected/aab34a0a-0d02-4365-8701-3261373ad53a-kube-api-access-mf6zj\") pod \"dashboard-metrics-scraper-5f989dc9cf-s57fs\" (UID: \"aab34a0a-0d02-4365-8701-3261373ad53a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs"
	Nov 15 11:45:14 old-k8s-version-872969 kubelet[774]: I1115 11:45:14.672551     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xwp\" (UniqueName: \"kubernetes.io/projected/4d1ca727-bfad-4baa-95c1-8bdb23a987a4-kube-api-access-49xwp\") pod \"kubernetes-dashboard-8694d4445c-9xc5k\" (UID: \"4d1ca727-bfad-4baa-95c1-8bdb23a987a4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9xc5k"
	Nov 15 11:45:14 old-k8s-version-872969 kubelet[774]: W1115 11:45:14.885988     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/crio-080c8b6ffe429a9f60eec51731210e1224213e5f0257265f9a1f2ef89a46dc4d WatchSource:0}: Error finding container 080c8b6ffe429a9f60eec51731210e1224213e5f0257265f9a1f2ef89a46dc4d: Status 404 returned error can't find the container with id 080c8b6ffe429a9f60eec51731210e1224213e5f0257265f9a1f2ef89a46dc4d
	Nov 15 11:45:19 old-k8s-version-872969 kubelet[774]: I1115 11:45:19.943277     774 scope.go:117] "RemoveContainer" containerID="c04bcb9a98d8b966faee98af0e017683131f1b10575a5ee2ec3406b6f152a1e5"
	Nov 15 11:45:20 old-k8s-version-872969 kubelet[774]: I1115 11:45:20.948292     774 scope.go:117] "RemoveContainer" containerID="c04bcb9a98d8b966faee98af0e017683131f1b10575a5ee2ec3406b6f152a1e5"
	Nov 15 11:45:20 old-k8s-version-872969 kubelet[774]: I1115 11:45:20.948578     774 scope.go:117] "RemoveContainer" containerID="2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867"
	Nov 15 11:45:20 old-k8s-version-872969 kubelet[774]: E1115 11:45:20.948961     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s57fs_kubernetes-dashboard(aab34a0a-0d02-4365-8701-3261373ad53a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs" podUID="aab34a0a-0d02-4365-8701-3261373ad53a"
	Nov 15 11:45:21 old-k8s-version-872969 kubelet[774]: I1115 11:45:21.951618     774 scope.go:117] "RemoveContainer" containerID="2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867"
	Nov 15 11:45:21 old-k8s-version-872969 kubelet[774]: E1115 11:45:21.951905     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s57fs_kubernetes-dashboard(aab34a0a-0d02-4365-8701-3261373ad53a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs" podUID="aab34a0a-0d02-4365-8701-3261373ad53a"
	Nov 15 11:45:24 old-k8s-version-872969 kubelet[774]: I1115 11:45:24.847020     774 scope.go:117] "RemoveContainer" containerID="2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867"
	Nov 15 11:45:24 old-k8s-version-872969 kubelet[774]: E1115 11:45:24.847331     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s57fs_kubernetes-dashboard(aab34a0a-0d02-4365-8701-3261373ad53a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs" podUID="aab34a0a-0d02-4365-8701-3261373ad53a"
	Nov 15 11:45:33 old-k8s-version-872969 kubelet[774]: I1115 11:45:33.984053     774 scope.go:117] "RemoveContainer" containerID="c918d4e74a7df8005e44d4f479b40a185931fc4b765b4b81137bbcdb61810076"
	Nov 15 11:45:34 old-k8s-version-872969 kubelet[774]: I1115 11:45:34.009613     774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9xc5k" podStartSLOduration=10.75295355 podCreationTimestamp="2025-11-15 11:45:14 +0000 UTC" firstStartedPulling="2025-11-15 11:45:14.891387796 +0000 UTC m=+18.285776196" lastFinishedPulling="2025-11-15 11:45:24.14796603 +0000 UTC m=+27.542354429" observedRunningTime="2025-11-15 11:45:24.977623204 +0000 UTC m=+28.372011612" watchObservedRunningTime="2025-11-15 11:45:34.009531783 +0000 UTC m=+37.403920191"
	Nov 15 11:45:37 old-k8s-version-872969 kubelet[774]: I1115 11:45:37.789650     774 scope.go:117] "RemoveContainer" containerID="2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867"
	Nov 15 11:45:38 old-k8s-version-872969 kubelet[774]: I1115 11:45:37.997525     774 scope.go:117] "RemoveContainer" containerID="2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867"
	Nov 15 11:45:38 old-k8s-version-872969 kubelet[774]: I1115 11:45:37.997748     774 scope.go:117] "RemoveContainer" containerID="68bb5d486b2287908b0fe52ab128a80f45c337c8da3f967e6a5c56fd065cf9f2"
	Nov 15 11:45:38 old-k8s-version-872969 kubelet[774]: E1115 11:45:37.998059     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s57fs_kubernetes-dashboard(aab34a0a-0d02-4365-8701-3261373ad53a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs" podUID="aab34a0a-0d02-4365-8701-3261373ad53a"
	Nov 15 11:45:44 old-k8s-version-872969 kubelet[774]: I1115 11:45:44.846803     774 scope.go:117] "RemoveContainer" containerID="68bb5d486b2287908b0fe52ab128a80f45c337c8da3f967e6a5c56fd065cf9f2"
	Nov 15 11:45:44 old-k8s-version-872969 kubelet[774]: E1115 11:45:44.847111     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s57fs_kubernetes-dashboard(aab34a0a-0d02-4365-8701-3261373ad53a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs" podUID="aab34a0a-0d02-4365-8701-3261373ad53a"
	Nov 15 11:45:50 old-k8s-version-872969 kubelet[774]: I1115 11:45:50.600232     774 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 15 11:45:50 old-k8s-version-872969 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 11:45:50 old-k8s-version-872969 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 11:45:50 old-k8s-version-872969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [db82c7b1afbcf518b92148f64445ec7c70f683303623edfa8dd13a0497384658] <==
	2025/11/15 11:45:24 Starting overwatch
	2025/11/15 11:45:24 Using namespace: kubernetes-dashboard
	2025/11/15 11:45:24 Using in-cluster config to connect to apiserver
	2025/11/15 11:45:24 Using secret token for csrf signing
	2025/11/15 11:45:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 11:45:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 11:45:24 Successful initial request to the apiserver, version: v1.28.0
	2025/11/15 11:45:24 Generating JWE encryption key
	2025/11/15 11:45:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 11:45:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 11:45:24 Initializing JWE encryption key from synchronized object
	2025/11/15 11:45:24 Creating in-cluster Sidecar client
	2025/11/15 11:45:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 11:45:24 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [02d2b5f4938e4f727c9ac593d2f34708d74a396c7d94efc50fd6294d92974c8a] <==
	I1115 11:45:34.041534       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 11:45:34.057244       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 11:45:34.057382       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1115 11:45:51.455022       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 11:45:51.455195       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-872969_cf84b641-a354-4cdf-8aec-4bf68893efa7!
	I1115 11:45:51.456085       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81006117-e52a-4d02-8262-09cc8cbb9b80", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-872969_cf84b641-a354-4cdf-8aec-4bf68893efa7 became leader
	I1115 11:45:51.556049       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-872969_cf84b641-a354-4cdf-8aec-4bf68893efa7!
	
	
	==> storage-provisioner [c918d4e74a7df8005e44d4f479b40a185931fc4b765b4b81137bbcdb61810076] <==
	I1115 11:45:03.418507       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 11:45:33.420219       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-872969 -n old-k8s-version-872969
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-872969 -n old-k8s-version-872969: exit status 2 (366.642037ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-872969 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-872969
helpers_test.go:243: (dbg) docker inspect old-k8s-version-872969:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80",
	        "Created": "2025-11-15T11:43:29.514556564Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 770756,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:44:49.619390047Z",
	            "FinishedAt": "2025-11-15T11:44:48.764971449Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/hostname",
	        "HostsPath": "/var/lib/docker/containers/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/hosts",
	        "LogPath": "/var/lib/docker/containers/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80-json.log",
	        "Name": "/old-k8s-version-872969",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-872969:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-872969",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80",
	                "LowerDir": "/var/lib/docker/overlay2/d28583d9fd967090aec67f47dcc0a8108b77dda2eb9d81dce80920e8f83075ef-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d28583d9fd967090aec67f47dcc0a8108b77dda2eb9d81dce80920e8f83075ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d28583d9fd967090aec67f47dcc0a8108b77dda2eb9d81dce80920e8f83075ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d28583d9fd967090aec67f47dcc0a8108b77dda2eb9d81dce80920e8f83075ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-872969",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-872969/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-872969",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-872969",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-872969",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0414405e20a115ccf2cbb5a5ec547187ddeb06230af5e0bce656cafb5dcaa07d",
	            "SandboxKey": "/var/run/docker/netns/0414405e20a1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33794"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33798"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33796"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33797"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-872969": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:d5:e5:ac:0c:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fe74aaea9f1ff898d8b3c6c329ef26fb68a67a4e5377e568964777357f485456",
	                    "EndpointID": "d18e01abc72b7896dee36c7182429028ddf2de9f688cddf521283e737abb492e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-872969",
	                        "661ed5bad40f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-872969 -n old-k8s-version-872969
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-872969 -n old-k8s-version-872969: exit status 2 (368.621506ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-872969 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-872969 logs -n 25: (1.357104244s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-949287 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo containerd config dump                                                                                                                                                                                                  │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo crio config                                                                                                                                                                                                             │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ delete  │ -p cilium-949287                                                                                                                                                                                                                              │ cilium-949287             │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │ 15 Nov 25 11:41 UTC │
	│ start   │ -p force-systemd-env-386707 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-386707  │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │ 15 Nov 25 11:42 UTC │
	│ delete  │ -p kubernetes-upgrade-436490                                                                                                                                                                                                                  │ kubernetes-upgrade-436490 │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:42 UTC │
	│ start   │ -p cert-expiration-636406 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-636406    │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:43 UTC │
	│ delete  │ -p force-systemd-env-386707                                                                                                                                                                                                                   │ force-systemd-env-386707  │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:42 UTC │
	│ start   │ -p cert-options-303284 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-303284       │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:43 UTC │
	│ ssh     │ cert-options-303284 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-303284       │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ ssh     │ -p cert-options-303284 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-303284       │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ delete  │ -p cert-options-303284                                                                                                                                                                                                                        │ cert-options-303284       │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:44 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-872969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │                     │
	│ stop    │ -p old-k8s-version-872969 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-872969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:44 UTC │
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:45 UTC │
	│ image   │ old-k8s-version-872969 image list --format=json                                                                                                                                                                                               │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ pause   │ -p old-k8s-version-872969 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-872969    │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
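	For reference, the last row in the table above (pause -p old-k8s-version-872969 --alsologtostderr -v=1) has no completion timestamp, and the log that follows is the start that preceded it. A minimal reproduction sketch using only commands already listed in the table (binary path as used elsewhere in this report; flags may need trimming for a local environment):
	
	  out/minikube-linux-arm64 start -p old-k8s-version-872969 --memory=3072 --driver=docker --container-runtime=crio --kubernetes-version=v1.28.0
	  out/minikube-linux-arm64 pause -p old-k8s-version-872969 --alsologtostderr -v=1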
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:44:49
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:44:49.333092  770629 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:44:49.333260  770629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:44:49.333291  770629 out.go:374] Setting ErrFile to fd 2...
	I1115 11:44:49.333310  770629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:44:49.333602  770629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:44:49.334018  770629 out.go:368] Setting JSON to false
	I1115 11:44:49.334948  770629 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12440,"bootTime":1763194649,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:44:49.335048  770629 start.go:143] virtualization:  
	I1115 11:44:49.338709  770629 out.go:179] * [old-k8s-version-872969] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:44:49.342864  770629 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:44:49.342978  770629 notify.go:221] Checking for updates...
	I1115 11:44:49.348939  770629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:44:49.351816  770629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:44:49.354696  770629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:44:49.357547  770629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:44:49.360413  770629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:44:49.363816  770629 config.go:182] Loaded profile config "old-k8s-version-872969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 11:44:49.367311  770629 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1115 11:44:49.370126  770629 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:44:49.400684  770629 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:44:49.400839  770629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:44:49.465656  770629 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:44:49.455604735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:44:49.465776  770629 docker.go:319] overlay module found
	I1115 11:44:49.468917  770629 out.go:179] * Using the docker driver based on existing profile
	I1115 11:44:49.471718  770629 start.go:309] selected driver: docker
	I1115 11:44:49.471734  770629 start.go:930] validating driver "docker" against &{Name:old-k8s-version-872969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:44:49.471847  770629 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:44:49.472589  770629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:44:49.531888  770629 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:44:49.522191791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:44:49.532234  770629 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:44:49.532268  770629 cni.go:84] Creating CNI manager for ""
	I1115 11:44:49.532326  770629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:44:49.532366  770629 start.go:353] cluster config:
	{Name:old-k8s-version-872969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:44:49.537383  770629 out.go:179] * Starting "old-k8s-version-872969" primary control-plane node in "old-k8s-version-872969" cluster
	I1115 11:44:49.540209  770629 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:44:49.543129  770629 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:44:49.545941  770629 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 11:44:49.545990  770629 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1115 11:44:49.546015  770629 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:44:49.546020  770629 cache.go:65] Caching tarball of preloaded images
	I1115 11:44:49.546108  770629 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:44:49.546117  770629 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1115 11:44:49.546226  770629 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/config.json ...
	I1115 11:44:49.565710  770629 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:44:49.565731  770629 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:44:49.565744  770629 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:44:49.565769  770629 start.go:360] acquireMachinesLock for old-k8s-version-872969: {Name:mk8e7def530b80cef5a2809f08776681cf0304db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:44:49.565829  770629 start.go:364] duration metric: took 36.325µs to acquireMachinesLock for "old-k8s-version-872969"
	I1115 11:44:49.565852  770629 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:44:49.565858  770629 fix.go:54] fixHost starting: 
	I1115 11:44:49.566123  770629 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:49.584666  770629 fix.go:112] recreateIfNeeded on old-k8s-version-872969: state=Stopped err=<nil>
	W1115 11:44:49.584695  770629 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:44:49.587907  770629 out.go:252] * Restarting existing docker container for "old-k8s-version-872969" ...
	I1115 11:44:49.587999  770629 cli_runner.go:164] Run: docker start old-k8s-version-872969
	I1115 11:44:49.868388  770629 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:49.888723  770629 kic.go:430] container "old-k8s-version-872969" state is running.
	I1115 11:44:49.889174  770629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-872969
	I1115 11:44:49.911307  770629 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/config.json ...
	I1115 11:44:49.911527  770629 machine.go:94] provisionDockerMachine start ...
	I1115 11:44:49.911594  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:49.933303  770629 main.go:143] libmachine: Using SSH client type: native
	I1115 11:44:49.933627  770629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33794 <nil> <nil>}
	I1115 11:44:49.933636  770629 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:44:49.934430  770629 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:44:53.096551  770629 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-872969
	
	I1115 11:44:53.096575  770629 ubuntu.go:182] provisioning hostname "old-k8s-version-872969"
	I1115 11:44:53.096641  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:53.114684  770629 main.go:143] libmachine: Using SSH client type: native
	I1115 11:44:53.115008  770629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33794 <nil> <nil>}
	I1115 11:44:53.115025  770629 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-872969 && echo "old-k8s-version-872969" | sudo tee /etc/hostname
	I1115 11:44:53.282908  770629 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-872969
	
	I1115 11:44:53.282987  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:53.305094  770629 main.go:143] libmachine: Using SSH client type: native
	I1115 11:44:53.305403  770629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33794 <nil> <nil>}
	I1115 11:44:53.305425  770629 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-872969' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-872969/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-872969' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:44:53.456951  770629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:44:53.456975  770629 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:44:53.456996  770629 ubuntu.go:190] setting up certificates
	I1115 11:44:53.457006  770629 provision.go:84] configureAuth start
	I1115 11:44:53.457080  770629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-872969
	I1115 11:44:53.473443  770629 provision.go:143] copyHostCerts
	I1115 11:44:53.473510  770629 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:44:53.473541  770629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:44:53.473622  770629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:44:53.473728  770629 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:44:53.473738  770629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:44:53.473765  770629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:44:53.473828  770629 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:44:53.473836  770629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:44:53.473860  770629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:44:53.473913  770629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-872969 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-872969]
	I1115 11:44:54.042162  770629 provision.go:177] copyRemoteCerts
	I1115 11:44:54.042232  770629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:44:54.042287  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:54.059987  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:54.165561  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 11:44:54.184515  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:44:54.202251  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1115 11:44:54.219770  770629 provision.go:87] duration metric: took 762.746321ms to configureAuth
	I1115 11:44:54.219795  770629 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:44:54.220027  770629 config.go:182] Loaded profile config "old-k8s-version-872969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 11:44:54.220135  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:54.241166  770629 main.go:143] libmachine: Using SSH client type: native
	I1115 11:44:54.241480  770629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33794 <nil> <nil>}
	I1115 11:44:54.241499  770629 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:44:54.565837  770629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:44:54.565875  770629 machine.go:97] duration metric: took 4.654329647s to provisionDockerMachine
	I1115 11:44:54.565888  770629 start.go:293] postStartSetup for "old-k8s-version-872969" (driver="docker")
	I1115 11:44:54.565898  770629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:44:54.565960  770629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:44:54.566008  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:54.586235  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:54.699004  770629 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:44:54.703163  770629 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:44:54.703194  770629 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:44:54.703206  770629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:44:54.703259  770629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:44:54.703351  770629 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:44:54.703458  770629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:44:54.712651  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:44:54.731751  770629 start.go:296] duration metric: took 165.847117ms for postStartSetup
	I1115 11:44:54.731829  770629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:44:54.731874  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:54.756093  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:54.857832  770629 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:44:54.862340  770629 fix.go:56] duration metric: took 5.296474151s for fixHost
	I1115 11:44:54.862362  770629 start.go:83] releasing machines lock for "old-k8s-version-872969", held for 5.296521692s
	I1115 11:44:54.862427  770629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-872969
	I1115 11:44:54.879098  770629 ssh_runner.go:195] Run: cat /version.json
	I1115 11:44:54.879165  770629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:44:54.879225  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:54.879183  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:54.905037  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:54.907862  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:55.107442  770629 ssh_runner.go:195] Run: systemctl --version
	I1115 11:44:55.114562  770629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:44:55.151283  770629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:44:55.155948  770629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:44:55.156036  770629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:44:55.163961  770629 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:44:55.164028  770629 start.go:496] detecting cgroup driver to use...
	I1115 11:44:55.164076  770629 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:44:55.164157  770629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:44:55.179127  770629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:44:55.192731  770629 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:44:55.192836  770629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:44:55.209195  770629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:44:55.223930  770629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:44:55.347381  770629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:44:55.467933  770629 docker.go:234] disabling docker service ...
	I1115 11:44:55.468065  770629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:44:55.485113  770629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:44:55.500411  770629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:44:55.627149  770629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:44:55.753742  770629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:44:55.766823  770629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:44:55.780490  770629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1115 11:44:55.780575  770629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:44:55.789454  770629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:44:55.789593  770629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:44:55.798976  770629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:44:55.807563  770629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:44:55.816271  770629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:44:55.824293  770629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:44:55.833014  770629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:44:55.841712  770629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:44:55.850263  770629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:44:55.857538  770629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:44:55.864850  770629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:44:55.983096  770629 ssh_runner.go:195] Run: sudo systemctl restart crio
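	Taken together, the sed commands above (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before the restart. This is a sketch reconstructed from the commands, not a dump of the file, and the TOML section headers are assumptions about where CRI-O keeps these keys:
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.9"
	
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]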
	I1115 11:44:56.125340  770629 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:44:56.125413  770629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:44:56.129485  770629 start.go:564] Will wait 60s for crictl version
	I1115 11:44:56.129549  770629 ssh_runner.go:195] Run: which crictl
	I1115 11:44:56.133545  770629 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:44:56.162386  770629 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:44:56.162494  770629 ssh_runner.go:195] Run: crio --version
	I1115 11:44:56.190460  770629 ssh_runner.go:195] Run: crio --version
	I1115 11:44:56.223914  770629 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1115 11:44:56.226681  770629 cli_runner.go:164] Run: docker network inspect old-k8s-version-872969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:44:56.242406  770629 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 11:44:56.246458  770629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:44:56.255895  770629 kubeadm.go:884] updating cluster {Name:old-k8s-version-872969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:44:56.256012  770629 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 11:44:56.256067  770629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:44:56.289772  770629 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:44:56.289792  770629 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:44:56.289855  770629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:44:56.318244  770629 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:44:56.318268  770629 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:44:56.318275  770629 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1115 11:44:56.318377  770629 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-872969 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:44:56.318458  770629 ssh_runner.go:195] Run: crio config
	I1115 11:44:56.396507  770629 cni.go:84] Creating CNI manager for ""
	I1115 11:44:56.396571  770629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:44:56.396624  770629 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:44:56.396675  770629 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-872969 NodeName:old-k8s-version-872969 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:44:56.396911  770629 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-872969"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 11:44:56.397017  770629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1115 11:44:56.404672  770629 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:44:56.404738  770629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:44:56.411978  770629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1115 11:44:56.424420  770629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:44:56.439553  770629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
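	The kubeadm.yaml.new transferred here corresponds to the multi-document config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). Outside of minikube's own restart flow, a file in that shape would be fed to kubeadm directly, for example (illustrative only; minikube orchestrates kubeadm itself):
	
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new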
	I1115 11:44:56.452009  770629 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:44:56.455855  770629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:44:56.465901  770629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:44:56.586871  770629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:44:56.608556  770629 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969 for IP: 192.168.85.2
	I1115 11:44:56.608618  770629 certs.go:195] generating shared ca certs ...
	I1115 11:44:56.608647  770629 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:44:56.608826  770629 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:44:56.608964  770629 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:44:56.608993  770629 certs.go:257] generating profile certs ...
	I1115 11:44:56.609132  770629 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.key
	I1115 11:44:56.609217  770629 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.key.5f4bae20
	I1115 11:44:56.609294  770629 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/proxy-client.key
	I1115 11:44:56.609454  770629 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:44:56.609519  770629 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:44:56.609562  770629 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:44:56.609610  770629 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:44:56.609649  770629 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:44:56.609701  770629 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:44:56.609782  770629 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:44:56.610426  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:44:56.638083  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:44:56.659295  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:44:56.680809  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:44:56.708401  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 11:44:56.734153  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:44:56.762376  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:44:56.796717  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:44:56.819726  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:44:56.842201  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:44:56.861962  770629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:44:56.881525  770629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:44:56.895303  770629 ssh_runner.go:195] Run: openssl version
	I1115 11:44:56.902201  770629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:44:56.910299  770629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:44:56.913862  770629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:44:56.913948  770629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:44:56.956329  770629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:44:56.964512  770629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:44:56.972977  770629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:44:56.976680  770629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:44:56.976791  770629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:44:57.017926  770629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:44:57.026012  770629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:44:57.034279  770629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:44:57.037955  770629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:44:57.038052  770629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:44:57.086437  770629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:44:57.094622  770629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:44:57.098630  770629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:44:57.143364  770629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:44:57.186284  770629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:44:57.229823  770629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:44:57.281264  770629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:44:57.330469  770629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
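	Each openssl x509 ... -checkend 86400 run above exits non-zero if the certificate in question expires within the next 86400 seconds (24 hours); on this restart path that presumably decides whether the existing control-plane certificates can be reused instead of being regenerated. The same check can be repeated by hand, e.g.:
	
	  # exits 0 and prints "Certificate will not expire" while the cert is valid for at least another day
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400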
	I1115 11:44:57.402163  770629 kubeadm.go:401] StartCluster: {Name:old-k8s-version-872969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-872969 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:44:57.402300  770629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:44:57.402394  770629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:44:57.478070  770629 cri.go:89] found id: "7aa6ea3c1cb8bb9a148b615de44117cceecc46195098acef3b88d91075fe34dc"
	I1115 11:44:57.478136  770629 cri.go:89] found id: "14b3db3b917aa89b721a2b8851a6103f8c835a7cdcbc14441bc08d8ffa25f1c3"
	I1115 11:44:57.478154  770629 cri.go:89] found id: "b6f31762d19d0b4a10be610346a7111f70357651c16509121df8b4ff7215a71f"
	I1115 11:44:57.478181  770629 cri.go:89] found id: "a4ec05da29c9d2dea7b01be81b7223bb05c63336ca59bfa38a4235ac8c2ea05f"
	I1115 11:44:57.478217  770629 cri.go:89] found id: ""
	I1115 11:44:57.478336  770629 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 11:44:57.489617  770629 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:44:57Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:44:57.489769  770629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:44:57.497837  770629 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:44:57.497904  770629 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:44:57.497996  770629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:44:57.505475  770629 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:44:57.506447  770629 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-872969" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:44:57.507341  770629 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-584713/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-872969" cluster setting kubeconfig missing "old-k8s-version-872969" context setting]
	I1115 11:44:57.508056  770629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:44:57.510037  770629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:44:57.520813  770629 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 11:44:57.520849  770629 kubeadm.go:602] duration metric: took 22.926289ms to restartPrimaryControlPlane
	I1115 11:44:57.520894  770629 kubeadm.go:403] duration metric: took 118.739965ms to StartCluster
	I1115 11:44:57.520912  770629 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:44:57.520980  770629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:44:57.521898  770629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:44:57.522103  770629 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:44:57.522419  770629 config.go:182] Loaded profile config "old-k8s-version-872969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 11:44:57.522465  770629 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:44:57.522570  770629 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-872969"
	I1115 11:44:57.522591  770629 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-872969"
	W1115 11:44:57.522597  770629 addons.go:248] addon storage-provisioner should already be in state true
	I1115 11:44:57.522620  770629 host.go:66] Checking if "old-k8s-version-872969" exists ...
	I1115 11:44:57.523067  770629 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:57.523237  770629 addons.go:70] Setting dashboard=true in profile "old-k8s-version-872969"
	I1115 11:44:57.523259  770629 addons.go:239] Setting addon dashboard=true in "old-k8s-version-872969"
	W1115 11:44:57.523279  770629 addons.go:248] addon dashboard should already be in state true
	I1115 11:44:57.523299  770629 host.go:66] Checking if "old-k8s-version-872969" exists ...
	I1115 11:44:57.523681  770629 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:57.525566  770629 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-872969"
	I1115 11:44:57.525598  770629 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-872969"
	I1115 11:44:57.525909  770629 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:57.527991  770629 out.go:179] * Verifying Kubernetes components...
	I1115 11:44:57.531136  770629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:44:57.564179  770629 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:44:57.567268  770629 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:44:57.567291  770629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:44:57.567362  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:57.588934  770629 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 11:44:57.591961  770629 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 11:44:57.594883  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 11:44:57.594907  770629 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 11:44:57.594970  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:57.595600  770629 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-872969"
	W1115 11:44:57.595614  770629 addons.go:248] addon default-storageclass should already be in state true
	I1115 11:44:57.595640  770629 host.go:66] Checking if "old-k8s-version-872969" exists ...
	I1115 11:44:57.596046  770629 cli_runner.go:164] Run: docker container inspect old-k8s-version-872969 --format={{.State.Status}}
	I1115 11:44:57.634385  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:57.649602  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:57.659707  770629 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:44:57.659733  770629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:44:57.659795  770629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-872969
	I1115 11:44:57.683552  770629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/old-k8s-version-872969/id_rsa Username:docker}
	I1115 11:44:57.888699  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 11:44:57.888723  770629 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 11:44:57.890522  770629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:44:57.922796  770629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:44:57.934162  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 11:44:57.934235  770629 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 11:44:57.958111  770629 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-872969" to be "Ready" ...
	I1115 11:44:57.995972  770629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:44:58.017411  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 11:44:58.017477  770629 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 11:44:58.091482  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 11:44:58.091556  770629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 11:44:58.170864  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 11:44:58.170934  770629 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 11:44:58.236171  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 11:44:58.236245  770629 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 11:44:58.265511  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 11:44:58.265593  770629 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 11:44:58.285579  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 11:44:58.285651  770629 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 11:44:58.311351  770629 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:44:58.311417  770629 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 11:44:58.329562  770629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:45:01.981600  770629 node_ready.go:49] node "old-k8s-version-872969" is "Ready"
	I1115 11:45:01.981626  770629 node_ready.go:38] duration metric: took 4.023434379s for node "old-k8s-version-872969" to be "Ready" ...
	I1115 11:45:01.981639  770629 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:45:01.981716  770629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:45:03.783263  770629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.860396919s)
	I1115 11:45:03.783320  770629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.787285389s)
	I1115 11:45:04.695222  770629 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.365604868s)
	I1115 11:45:04.695328  770629 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.713600925s)
	I1115 11:45:04.695423  770629 api_server.go:72] duration metric: took 7.173284079s to wait for apiserver process to appear ...
	I1115 11:45:04.695430  770629 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:45:04.695450  770629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 11:45:04.699430  770629 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-872969 addons enable metrics-server
	
	I1115 11:45:04.702410  770629 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1115 11:45:04.705842  770629 addons.go:515] duration metric: took 7.183368195s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1115 11:45:04.706762  770629 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 11:45:04.708198  770629 api_server.go:141] control plane version: v1.28.0
	I1115 11:45:04.708238  770629 api_server.go:131] duration metric: took 12.792841ms to wait for apiserver health ...
	I1115 11:45:04.708249  770629 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:45:04.715274  770629 system_pods.go:59] 8 kube-system pods found
	I1115 11:45:04.715337  770629 system_pods.go:61] "coredns-5dd5756b68-rndhq" [5de00329-d0e0-48be-9d3d-39b760cb0ea8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:45:04.715351  770629 system_pods.go:61] "etcd-old-k8s-version-872969" [4db73848-4c5c-4849-8a90-a3d6570064b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:45:04.715364  770629 system_pods.go:61] "kindnet-zmkg5" [623da114-560f-4888-a498-ef271e3da582] Running
	I1115 11:45:04.715377  770629 system_pods.go:61] "kube-apiserver-old-k8s-version-872969" [8adbb139-5c3b-4f75-983e-bf9010e0c46e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:45:04.715392  770629 system_pods.go:61] "kube-controller-manager-old-k8s-version-872969" [171522f5-2d0f-4cdd-aeca-e56a9ff15b6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:45:04.715421  770629 system_pods.go:61] "kube-proxy-tgrgq" [f8984361-3dcd-41a6-bc3b-cd185d25b7b6] Running
	I1115 11:45:04.715434  770629 system_pods.go:61] "kube-scheduler-old-k8s-version-872969" [c2d07ad6-d326-4ade-8eb5-5002d24cc986] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:45:04.715439  770629 system_pods.go:61] "storage-provisioner" [ba1eb52a-c93b-4fbf-981e-58bf5de71141] Running
	I1115 11:45:04.715458  770629 system_pods.go:74] duration metric: took 7.19915ms to wait for pod list to return data ...
	I1115 11:45:04.715470  770629 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:45:04.720014  770629 default_sa.go:45] found service account: "default"
	I1115 11:45:04.720044  770629 default_sa.go:55] duration metric: took 4.567054ms for default service account to be created ...
	I1115 11:45:04.720054  770629 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:45:04.726587  770629 system_pods.go:86] 8 kube-system pods found
	I1115 11:45:04.726619  770629 system_pods.go:89] "coredns-5dd5756b68-rndhq" [5de00329-d0e0-48be-9d3d-39b760cb0ea8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:45:04.726649  770629 system_pods.go:89] "etcd-old-k8s-version-872969" [4db73848-4c5c-4849-8a90-a3d6570064b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:45:04.726660  770629 system_pods.go:89] "kindnet-zmkg5" [623da114-560f-4888-a498-ef271e3da582] Running
	I1115 11:45:04.726668  770629 system_pods.go:89] "kube-apiserver-old-k8s-version-872969" [8adbb139-5c3b-4f75-983e-bf9010e0c46e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:45:04.726678  770629 system_pods.go:89] "kube-controller-manager-old-k8s-version-872969" [171522f5-2d0f-4cdd-aeca-e56a9ff15b6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:45:04.726684  770629 system_pods.go:89] "kube-proxy-tgrgq" [f8984361-3dcd-41a6-bc3b-cd185d25b7b6] Running
	I1115 11:45:04.726691  770629 system_pods.go:89] "kube-scheduler-old-k8s-version-872969" [c2d07ad6-d326-4ade-8eb5-5002d24cc986] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:45:04.726697  770629 system_pods.go:89] "storage-provisioner" [ba1eb52a-c93b-4fbf-981e-58bf5de71141] Running
	I1115 11:45:04.726714  770629 system_pods.go:126] duration metric: took 6.644633ms to wait for k8s-apps to be running ...
	I1115 11:45:04.726727  770629 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:45:04.726794  770629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:45:04.741475  770629 system_svc.go:56] duration metric: took 14.737497ms WaitForService to wait for kubelet
	I1115 11:45:04.741549  770629 kubeadm.go:587] duration metric: took 7.219408083s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:45:04.741585  770629 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:45:04.752656  770629 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:45:04.752691  770629 node_conditions.go:123] node cpu capacity is 2
	I1115 11:45:04.752705  770629 node_conditions.go:105] duration metric: took 11.101688ms to run NodePressure ...
	I1115 11:45:04.752718  770629 start.go:242] waiting for startup goroutines ...
	I1115 11:45:04.752726  770629 start.go:247] waiting for cluster config update ...
	I1115 11:45:04.752738  770629 start.go:256] writing updated cluster config ...
	I1115 11:45:04.753057  770629 ssh_runner.go:195] Run: rm -f paused
	I1115 11:45:04.757291  770629 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:45:04.771466  770629 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-rndhq" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:45:06.777290  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:08.778080  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:11.277493  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:13.776891  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:15.779364  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:18.278952  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:20.778535  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:23.280666  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:25.777136  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:27.778114  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:30.277697  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:32.777671  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	W1115 11:45:35.278639  770629 pod_ready.go:104] pod "coredns-5dd5756b68-rndhq" is not "Ready", error: <nil>
	I1115 11:45:35.778058  770629 pod_ready.go:94] pod "coredns-5dd5756b68-rndhq" is "Ready"
	I1115 11:45:35.778087  770629 pod_ready.go:86] duration metric: took 31.006590507s for pod "coredns-5dd5756b68-rndhq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:35.781429  770629 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:35.787321  770629 pod_ready.go:94] pod "etcd-old-k8s-version-872969" is "Ready"
	I1115 11:45:35.787350  770629 pod_ready.go:86] duration metric: took 5.897235ms for pod "etcd-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:35.790509  770629 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:35.795731  770629 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-872969" is "Ready"
	I1115 11:45:35.795809  770629 pod_ready.go:86] duration metric: took 5.26919ms for pod "kube-apiserver-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:35.798877  770629 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:35.975518  770629 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-872969" is "Ready"
	I1115 11:45:35.975546  770629 pod_ready.go:86] duration metric: took 176.62724ms for pod "kube-controller-manager-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:36.176573  770629 pod_ready.go:83] waiting for pod "kube-proxy-tgrgq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:36.575781  770629 pod_ready.go:94] pod "kube-proxy-tgrgq" is "Ready"
	I1115 11:45:36.575857  770629 pod_ready.go:86] duration metric: took 399.254789ms for pod "kube-proxy-tgrgq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:36.776596  770629 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:37.175612  770629 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-872969" is "Ready"
	I1115 11:45:37.175649  770629 pod_ready.go:86] duration metric: took 399.027848ms for pod "kube-scheduler-old-k8s-version-872969" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:45:37.175661  770629 pod_ready.go:40] duration metric: took 32.418322048s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:45:37.234476  770629 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1115 11:45:37.237857  770629 out.go:203] 
	W1115 11:45:37.240746  770629 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1115 11:45:37.243697  770629 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1115 11:45:37.246822  770629 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-872969" cluster and "default" namespace by default
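Note: the certificate freshness checks logged near the top of this run use `openssl x509 -checkend 86400` against each control-plane certificate. A minimal sketch for reproducing one of those checks by hand, assuming the profile name `old-k8s-version-872969` from this run and SSH access through minikube:

	# Ask openssl whether the apiserver-kubelet client cert stays valid for the next 24h (86400s);
	# exit status 0 means it will not expire within that window, matching the checks in the log above.
	minikube ssh -p old-k8s-version-872969 -- sudo openssl x509 -noout \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400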
	
	
	==> CRI-O <==
	Nov 15 11:45:37 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:37.794086241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:45:37 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:37.803816529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:45:37 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:37.804438387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:45:37 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:37.822312927Z" level=info msg="Created container 68bb5d486b2287908b0fe52ab128a80f45c337c8da3f967e6a5c56fd065cf9f2: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs/dashboard-metrics-scraper" id=c8f4afc7-d691-483c-9029-3b95506bd8ca name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:45:37 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:37.823347039Z" level=info msg="Starting container: 68bb5d486b2287908b0fe52ab128a80f45c337c8da3f967e6a5c56fd065cf9f2" id=64cb7789-8630-4fcc-b0b4-0bc32a24eea0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:45:37 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:37.825464398Z" level=info msg="Started container" PID=1665 containerID=68bb5d486b2287908b0fe52ab128a80f45c337c8da3f967e6a5c56fd065cf9f2 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs/dashboard-metrics-scraper id=64cb7789-8630-4fcc-b0b4-0bc32a24eea0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3be693604f6b24026357f2fea53a4d46cc02f63e41ca8be7db54d6cf15f23408
	Nov 15 11:45:37 old-k8s-version-872969 conmon[1663]: conmon 68bb5d486b2287908b0f <ninfo>: container 1665 exited with status 1
	Nov 15 11:45:38 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:37.999790021Z" level=info msg="Removing container: 2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867" id=c9ed08c4-85b3-4b81-b40d-82f69b59bd05 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:45:38 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:38.012768298Z" level=info msg="Error loading conmon cgroup of container 2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867: cgroup deleted" id=c9ed08c4-85b3-4b81-b40d-82f69b59bd05 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:45:38 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:38.017238324Z" level=info msg="Removed container 2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs/dashboard-metrics-scraper" id=c9ed08c4-85b3-4b81-b40d-82f69b59bd05 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.70783203Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.71578491Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.715819979Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.715845432Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.719067517Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.71910074Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.719124674Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.7225523Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.722586007Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.722609827Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.725691824Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.725833528Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.725869746Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.729121706Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:45:43 old-k8s-version-872969 crio[648]: time="2025-11-15T11:45:43.729153387Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	68bb5d486b228       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   3be693604f6b2       dashboard-metrics-scraper-5f989dc9cf-s57fs       kubernetes-dashboard
	02d2b5f4938e4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   6bdfeb259c748       storage-provisioner                              kube-system
	db82c7b1afbcf       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   080c8b6ffe429       kubernetes-dashboard-8694d4445c-9xc5k            kubernetes-dashboard
	804eef2c91baf       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           52 seconds ago      Running             coredns                     1                   3c0eddbced8b4       coredns-5dd5756b68-rndhq                         kube-system
	ac16149e58273       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   47c604966ac05       kindnet-zmkg5                                    kube-system
	c918d4e74a7df       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   6bdfeb259c748       storage-provisioner                              kube-system
	d454980894d97       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   a5124b1da49b6       busybox                                          default
	32c5d8ff7931f       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           52 seconds ago      Running             kube-proxy                  1                   6c4b51b39c069       kube-proxy-tgrgq                                 kube-system
	7aa6ea3c1cb8b       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           58 seconds ago      Running             kube-scheduler              1                   2be490bfc7af2       kube-scheduler-old-k8s-version-872969            kube-system
	14b3db3b917aa       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           58 seconds ago      Running             kube-apiserver              1                   2d41cc61eec1b       kube-apiserver-old-k8s-version-872969            kube-system
	b6f31762d19d0       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           58 seconds ago      Running             kube-controller-manager     1                   f911225773966       kube-controller-manager-old-k8s-version-872969   kube-system
	a4ec05da29c9d       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           58 seconds ago      Running             etcd                        1                   93ba964508ada       etcd-old-k8s-version-872969                      kube-system
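The container status table above reflects what CRI-O reports through crictl. A minimal sketch of the same query the restart path runs earlier in this log (IDs only, kube-system namespace), again assuming the profile name from this run:

	# List all kube-system containers known to CRI-O, quiet output, as StartCluster does above.
	minikube ssh -p old-k8s-version-872969 -- sudo crictl ps -a --quiet \
	  --label io.kubernetes.pod.namespace=kube-system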
	
	
	==> coredns [804eef2c91baf9184c8c7c9e054bbd57aa28d6dd39d97e7b71f4fb811d18ba99] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34146 - 28918 "HINFO IN 4976347791005943752.2199682188033655270. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023198201s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-872969
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-872969
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=old-k8s-version-872969
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_43_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:43:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-872969
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:45:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:45:32 +0000   Sat, 15 Nov 2025 11:43:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:45:32 +0000   Sat, 15 Nov 2025 11:43:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:45:32 +0000   Sat, 15 Nov 2025 11:43:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:45:32 +0000   Sat, 15 Nov 2025 11:44:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-872969
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                3b68266a-d7a6-4882-86de-e8553ea8772d
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-rndhq                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     106s
	  kube-system                 etcd-old-k8s-version-872969                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         119s
	  kube-system                 kindnet-zmkg5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-old-k8s-version-872969             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-872969    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-tgrgq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-old-k8s-version-872969             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-s57fs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-9xc5k             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node old-k8s-version-872969 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node old-k8s-version-872969 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node old-k8s-version-872969 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           106s               node-controller  Node old-k8s-version-872969 event: Registered Node old-k8s-version-872969 in Controller
	  Normal  NodeReady                92s                kubelet          Node old-k8s-version-872969 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node old-k8s-version-872969 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node old-k8s-version-872969 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node old-k8s-version-872969 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node old-k8s-version-872969 event: Registered Node old-k8s-version-872969 in Controller
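The node description above is standard `kubectl describe node` output. A minimal sketch for re-fetching it against this cluster, assuming the kubeconfig context created by this run carries the profile name (as the "Done!" line above indicates):

	# Re-read the node conditions, capacity and events shown above.
	kubectl --context old-k8s-version-872969 describe node old-k8s-version-872969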
	
	
	==> dmesg <==
	[Nov15 11:18] overlayfs: idmapped layers are currently not supported
	[Nov15 11:22] overlayfs: idmapped layers are currently not supported
	[Nov15 11:23] overlayfs: idmapped layers are currently not supported
	[Nov15 11:24] overlayfs: idmapped layers are currently not supported
	[Nov15 11:25] overlayfs: idmapped layers are currently not supported
	[Nov15 11:26] overlayfs: idmapped layers are currently not supported
	[Nov15 11:28] overlayfs: idmapped layers are currently not supported
	[ +23.116625] overlayfs: idmapped layers are currently not supported
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	[Nov15 11:44] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a4ec05da29c9d2dea7b01be81b7223bb05c63336ca59bfa38a4235ac8c2ea05f] <==
	{"level":"info","ts":"2025-11-15T11:44:57.785624Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-15T11:44:57.785663Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-15T11:44:57.786011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-15T11:44:57.786471Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-15T11:44:57.786634Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T11:44:57.786695Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T11:44:57.788244Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-15T11:44:57.788522Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-15T11:44:57.788589Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-15T11:44:57.78873Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T11:44:57.789021Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T11:44:59.054212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-15T11:44:59.05432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-15T11:44:59.054375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-15T11:44:59.054414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-15T11:44:59.054447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-15T11:44:59.054483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-15T11:44:59.054512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-15T11:44:59.057058Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-872969 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-15T11:44:59.057252Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T11:44:59.058337Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-15T11:44:59.058746Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T11:44:59.060219Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-15T11:44:59.060329Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-15T11:44:59.064335Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:45:55 up  3:28,  0 user,  load average: 2.13, 3.07, 2.69
	Linux old-k8s-version-872969 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ac16149e5827319ea9d87af9b598f570997a123f2b974665d5b967913bb2c2fe] <==
	I1115 11:45:03.551475       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:45:03.551704       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 11:45:03.551835       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:45:03.551846       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:45:03.551856       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:45:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:45:03.702662       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:45:03.702934       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:45:03.702980       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:45:03.703764       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 11:45:33.703597       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 11:45:33.703885       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 11:45:33.703970       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 11:45:33.704049       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1115 11:45:35.303996       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:45:35.304029       1 metrics.go:72] Registering metrics
	I1115 11:45:35.304087       1 controller.go:711] "Syncing nftables rules"
	I1115 11:45:43.706729       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:45:43.706846       1 main.go:301] handling current node
	I1115 11:45:53.709234       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:45:53.709267       1 main.go:301] handling current node
	
	
	==> kube-apiserver [14b3db3b917aa89b721a2b8851a6103f8c835a7cdcbc14441bc08d8ffa25f1c3] <==
	I1115 11:45:02.039541       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 11:45:02.060587       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:45:02.064548       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1115 11:45:02.064588       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1115 11:45:02.070389       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1115 11:45:02.064609       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1115 11:45:02.071416       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	I1115 11:45:02.088966       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1115 11:45:02.140649       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 11:45:02.797558       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:45:04.342441       1 controller.go:624] quota admission added evaluator for: namespaces
	I1115 11:45:04.451324       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1115 11:45:04.511874       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:45:04.545327       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:45:04.562665       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1115 11:45:04.654458       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.164.137"}
	I1115 11:45:04.683241       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.20.140"}
	E1115 11:45:12.072502       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	I1115 11:45:14.456324       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1115 11:45:14.551907       1 controller.go:624] quota admission added evaluator for: endpoints
	I1115 11:45:14.570979       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1115 11:45:22.073945       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E1115 11:45:32.074918       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E1115 11:45:42.075934       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E1115 11:45:52.076451       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [b6f31762d19d0b4a10be610346a7111f70357651c16509121df8b4ff7215a71f] <==
	I1115 11:45:14.538225       1 event.go:307] "Event occurred" object="old-k8s-version-872969" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-872969 event: Registered Node old-k8s-version-872969 in Controller"
	I1115 11:45:14.538529       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="70.697241ms"
	I1115 11:45:14.538870       1 shared_informer.go:318] Caches are synced for TTL
	I1115 11:45:14.549063       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.017577ms"
	I1115 11:45:14.571388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="22.200592ms"
	I1115 11:45:14.571549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.073µs"
	I1115 11:45:14.591285       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1115 11:45:14.599087       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 11:45:14.599646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.039797ms"
	I1115 11:45:14.606981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.139µs"
	I1115 11:45:14.614291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.536698ms"
	I1115 11:45:14.614506       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="91.242µs"
	I1115 11:45:14.636467       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 11:45:14.992793       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 11:45:15.005589       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 11:45:15.005634       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1115 11:45:19.972944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="125.482µs"
	I1115 11:45:20.978579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.84µs"
	I1115 11:45:21.972508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.782µs"
	I1115 11:45:24.989929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.396799ms"
	I1115 11:45:24.990122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.568µs"
	I1115 11:45:35.650992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.678669ms"
	I1115 11:45:35.651093       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.569µs"
	I1115 11:45:38.026135       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.096µs"
	I1115 11:45:44.861777       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.81µs"
	
	
	==> kube-proxy [32c5d8ff7931f51b39e4d677f3fa8990d64a8ce4f501f08b17cffa0a306cd10b] <==
	I1115 11:45:03.704990       1 server_others.go:69] "Using iptables proxy"
	I1115 11:45:03.751307       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1115 11:45:04.342270       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:45:04.345308       1 server_others.go:152] "Using iptables Proxier"
	I1115 11:45:04.345402       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1115 11:45:04.345435       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1115 11:45:04.345495       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1115 11:45:04.345723       1 server.go:846] "Version info" version="v1.28.0"
	I1115 11:45:04.345935       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:45:04.346624       1 config.go:188] "Starting service config controller"
	I1115 11:45:04.346692       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1115 11:45:04.346741       1 config.go:97] "Starting endpoint slice config controller"
	I1115 11:45:04.346769       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1115 11:45:04.347258       1 config.go:315] "Starting node config controller"
	I1115 11:45:04.347311       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1115 11:45:04.446995       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1115 11:45:04.462788       1 shared_informer.go:318] Caches are synced for service config
	I1115 11:45:04.462858       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7aa6ea3c1cb8bb9a148b615de44117cceecc46195098acef3b88d91075fe34dc] <==
	I1115 11:45:02.363200       1 serving.go:348] Generated self-signed cert in-memory
	I1115 11:45:04.717762       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1115 11:45:04.718704       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:45:04.727021       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1115 11:45:04.727397       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1115 11:45:04.727443       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1115 11:45:04.727483       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1115 11:45:04.734669       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:45:04.734760       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1115 11:45:04.734806       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:45:04.734835       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1115 11:45:04.827517       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1115 11:45:04.835037       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1115 11:45:04.835040       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 15 11:45:14 old-k8s-version-872969 kubelet[774]: I1115 11:45:14.672418     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4d1ca727-bfad-4baa-95c1-8bdb23a987a4-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-9xc5k\" (UID: \"4d1ca727-bfad-4baa-95c1-8bdb23a987a4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9xc5k"
	Nov 15 11:45:14 old-k8s-version-872969 kubelet[774]: I1115 11:45:14.672481     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aab34a0a-0d02-4365-8701-3261373ad53a-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-s57fs\" (UID: \"aab34a0a-0d02-4365-8701-3261373ad53a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs"
	Nov 15 11:45:14 old-k8s-version-872969 kubelet[774]: I1115 11:45:14.672513     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf6zj\" (UniqueName: \"kubernetes.io/projected/aab34a0a-0d02-4365-8701-3261373ad53a-kube-api-access-mf6zj\") pod \"dashboard-metrics-scraper-5f989dc9cf-s57fs\" (UID: \"aab34a0a-0d02-4365-8701-3261373ad53a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs"
	Nov 15 11:45:14 old-k8s-version-872969 kubelet[774]: I1115 11:45:14.672551     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xwp\" (UniqueName: \"kubernetes.io/projected/4d1ca727-bfad-4baa-95c1-8bdb23a987a4-kube-api-access-49xwp\") pod \"kubernetes-dashboard-8694d4445c-9xc5k\" (UID: \"4d1ca727-bfad-4baa-95c1-8bdb23a987a4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9xc5k"
	Nov 15 11:45:14 old-k8s-version-872969 kubelet[774]: W1115 11:45:14.885988     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/661ed5bad40fd2c6093d81cd271dd1ee4df3c74dfd5a4fba2710fc9150682a80/crio-080c8b6ffe429a9f60eec51731210e1224213e5f0257265f9a1f2ef89a46dc4d WatchSource:0}: Error finding container 080c8b6ffe429a9f60eec51731210e1224213e5f0257265f9a1f2ef89a46dc4d: Status 404 returned error can't find the container with id 080c8b6ffe429a9f60eec51731210e1224213e5f0257265f9a1f2ef89a46dc4d
	Nov 15 11:45:19 old-k8s-version-872969 kubelet[774]: I1115 11:45:19.943277     774 scope.go:117] "RemoveContainer" containerID="c04bcb9a98d8b966faee98af0e017683131f1b10575a5ee2ec3406b6f152a1e5"
	Nov 15 11:45:20 old-k8s-version-872969 kubelet[774]: I1115 11:45:20.948292     774 scope.go:117] "RemoveContainer" containerID="c04bcb9a98d8b966faee98af0e017683131f1b10575a5ee2ec3406b6f152a1e5"
	Nov 15 11:45:20 old-k8s-version-872969 kubelet[774]: I1115 11:45:20.948578     774 scope.go:117] "RemoveContainer" containerID="2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867"
	Nov 15 11:45:20 old-k8s-version-872969 kubelet[774]: E1115 11:45:20.948961     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s57fs_kubernetes-dashboard(aab34a0a-0d02-4365-8701-3261373ad53a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs" podUID="aab34a0a-0d02-4365-8701-3261373ad53a"
	Nov 15 11:45:21 old-k8s-version-872969 kubelet[774]: I1115 11:45:21.951618     774 scope.go:117] "RemoveContainer" containerID="2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867"
	Nov 15 11:45:21 old-k8s-version-872969 kubelet[774]: E1115 11:45:21.951905     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s57fs_kubernetes-dashboard(aab34a0a-0d02-4365-8701-3261373ad53a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs" podUID="aab34a0a-0d02-4365-8701-3261373ad53a"
	Nov 15 11:45:24 old-k8s-version-872969 kubelet[774]: I1115 11:45:24.847020     774 scope.go:117] "RemoveContainer" containerID="2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867"
	Nov 15 11:45:24 old-k8s-version-872969 kubelet[774]: E1115 11:45:24.847331     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s57fs_kubernetes-dashboard(aab34a0a-0d02-4365-8701-3261373ad53a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs" podUID="aab34a0a-0d02-4365-8701-3261373ad53a"
	Nov 15 11:45:33 old-k8s-version-872969 kubelet[774]: I1115 11:45:33.984053     774 scope.go:117] "RemoveContainer" containerID="c918d4e74a7df8005e44d4f479b40a185931fc4b765b4b81137bbcdb61810076"
	Nov 15 11:45:34 old-k8s-version-872969 kubelet[774]: I1115 11:45:34.009613     774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9xc5k" podStartSLOduration=10.75295355 podCreationTimestamp="2025-11-15 11:45:14 +0000 UTC" firstStartedPulling="2025-11-15 11:45:14.891387796 +0000 UTC m=+18.285776196" lastFinishedPulling="2025-11-15 11:45:24.14796603 +0000 UTC m=+27.542354429" observedRunningTime="2025-11-15 11:45:24.977623204 +0000 UTC m=+28.372011612" watchObservedRunningTime="2025-11-15 11:45:34.009531783 +0000 UTC m=+37.403920191"
	Nov 15 11:45:37 old-k8s-version-872969 kubelet[774]: I1115 11:45:37.789650     774 scope.go:117] "RemoveContainer" containerID="2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867"
	Nov 15 11:45:38 old-k8s-version-872969 kubelet[774]: I1115 11:45:37.997525     774 scope.go:117] "RemoveContainer" containerID="2820b6305422e62ccef8aa423543efe6d7103bc8e4f3ca795817c7d105205867"
	Nov 15 11:45:38 old-k8s-version-872969 kubelet[774]: I1115 11:45:37.997748     774 scope.go:117] "RemoveContainer" containerID="68bb5d486b2287908b0fe52ab128a80f45c337c8da3f967e6a5c56fd065cf9f2"
	Nov 15 11:45:38 old-k8s-version-872969 kubelet[774]: E1115 11:45:37.998059     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s57fs_kubernetes-dashboard(aab34a0a-0d02-4365-8701-3261373ad53a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs" podUID="aab34a0a-0d02-4365-8701-3261373ad53a"
	Nov 15 11:45:44 old-k8s-version-872969 kubelet[774]: I1115 11:45:44.846803     774 scope.go:117] "RemoveContainer" containerID="68bb5d486b2287908b0fe52ab128a80f45c337c8da3f967e6a5c56fd065cf9f2"
	Nov 15 11:45:44 old-k8s-version-872969 kubelet[774]: E1115 11:45:44.847111     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s57fs_kubernetes-dashboard(aab34a0a-0d02-4365-8701-3261373ad53a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s57fs" podUID="aab34a0a-0d02-4365-8701-3261373ad53a"
	Nov 15 11:45:50 old-k8s-version-872969 kubelet[774]: I1115 11:45:50.600232     774 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 15 11:45:50 old-k8s-version-872969 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 11:45:50 old-k8s-version-872969 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 11:45:50 old-k8s-version-872969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [db82c7b1afbcf518b92148f64445ec7c70f683303623edfa8dd13a0497384658] <==
	2025/11/15 11:45:24 Starting overwatch
	2025/11/15 11:45:24 Using namespace: kubernetes-dashboard
	2025/11/15 11:45:24 Using in-cluster config to connect to apiserver
	2025/11/15 11:45:24 Using secret token for csrf signing
	2025/11/15 11:45:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 11:45:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 11:45:24 Successful initial request to the apiserver, version: v1.28.0
	2025/11/15 11:45:24 Generating JWE encryption key
	2025/11/15 11:45:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 11:45:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 11:45:24 Initializing JWE encryption key from synchronized object
	2025/11/15 11:45:24 Creating in-cluster Sidecar client
	2025/11/15 11:45:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 11:45:24 Serving insecurely on HTTP port: 9090
	2025/11/15 11:45:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [02d2b5f4938e4f727c9ac593d2f34708d74a396c7d94efc50fd6294d92974c8a] <==
	I1115 11:45:34.041534       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 11:45:34.057244       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 11:45:34.057382       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1115 11:45:51.455022       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 11:45:51.455195       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-872969_cf84b641-a354-4cdf-8aec-4bf68893efa7!
	I1115 11:45:51.456085       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81006117-e52a-4d02-8262-09cc8cbb9b80", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-872969_cf84b641-a354-4cdf-8aec-4bf68893efa7 became leader
	I1115 11:45:51.556049       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-872969_cf84b641-a354-4cdf-8aec-4bf68893efa7!
	
	
	==> storage-provisioner [c918d4e74a7df8005e44d4f479b40a185931fc4b765b4b81137bbcdb61810076] <==
	I1115 11:45:03.418507       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 11:45:33.420219       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-872969 -n old-k8s-version-872969
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-872969 -n old-k8s-version-872969: exit status 2 (371.024353ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-872969 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.65s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-769461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-769461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (267.164716ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:47:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-769461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-769461 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-769461 describe deploy/metrics-server -n kube-system: exit status 1 (93.133373ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-769461 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-769461
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-769461:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054",
	        "Created": "2025-11-15T11:46:05.665660971Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 774574,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:46:05.751402149Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/hostname",
	        "HostsPath": "/var/lib/docker/containers/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/hosts",
	        "LogPath": "/var/lib/docker/containers/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054-json.log",
	        "Name": "/default-k8s-diff-port-769461",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-769461:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-769461",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054",
	                "LowerDir": "/var/lib/docker/overlay2/b4652a04669bd6a09fb7076cb3aa2068a43fcd682c401faf158afa049b1e75b7-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b4652a04669bd6a09fb7076cb3aa2068a43fcd682c401faf158afa049b1e75b7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b4652a04669bd6a09fb7076cb3aa2068a43fcd682c401faf158afa049b1e75b7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b4652a04669bd6a09fb7076cb3aa2068a43fcd682c401faf158afa049b1e75b7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-769461",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-769461/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-769461",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-769461",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-769461",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7897db928fb7590211375563ace2ef92e714b25e2e15763878353205bbd52b0d",
	            "SandboxKey": "/var/run/docker/netns/7897db928fb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-769461": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:b0:e1:5b:f2:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "97f28ee3e21c22cb67f771931d4a0c5ff8297079a2da7de0d16d0518cb24266f",
	                    "EndpointID": "c398f5ca0daedc0b373b9b7ea3c2a6b147398abf9513879c08182a73ed4699f5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-769461",
	                        "6bc3c2610e90"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-769461 -n default-k8s-diff-port-769461
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-769461 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-769461 logs -n 25: (1.186228886s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-949287 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-949287                │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-949287 sudo crio config                                                                                                                                                                                                             │ cilium-949287                │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │                     │
	│ delete  │ -p cilium-949287                                                                                                                                                                                                                              │ cilium-949287                │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │ 15 Nov 25 11:41 UTC │
	│ start   │ -p force-systemd-env-386707 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-386707     │ jenkins │ v1.37.0 │ 15 Nov 25 11:41 UTC │ 15 Nov 25 11:42 UTC │
	│ delete  │ -p kubernetes-upgrade-436490                                                                                                                                                                                                                  │ kubernetes-upgrade-436490    │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:42 UTC │
	│ start   │ -p cert-expiration-636406 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:43 UTC │
	│ delete  │ -p force-systemd-env-386707                                                                                                                                                                                                                   │ force-systemd-env-386707     │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:42 UTC │
	│ start   │ -p cert-options-303284 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-303284          │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:43 UTC │
	│ ssh     │ cert-options-303284 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-303284          │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ ssh     │ -p cert-options-303284 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-303284          │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ delete  │ -p cert-options-303284                                                                                                                                                                                                                        │ cert-options-303284          │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:44 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-872969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │                     │
	│ stop    │ -p old-k8s-version-872969 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-872969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:44 UTC │
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:45 UTC │
	│ image   │ old-k8s-version-872969 image list --format=json                                                                                                                                                                                               │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ pause   │ -p old-k8s-version-872969 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │                     │
	│ delete  │ -p old-k8s-version-872969                                                                                                                                                                                                                     │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ delete  │ -p old-k8s-version-872969                                                                                                                                                                                                                     │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ start   │ -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:47 UTC │
	│ start   │ -p cert-expiration-636406 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ delete  │ -p cert-expiration-636406                                                                                                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-769461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:46:46
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:46:46.451762  777983 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:46:46.451911  777983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:46:46.451933  777983 out.go:374] Setting ErrFile to fd 2...
	I1115 11:46:46.451940  777983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:46:46.452297  777983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:46:46.452785  777983 out.go:368] Setting JSON to false
	I1115 11:46:46.453888  777983 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12557,"bootTime":1763194649,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:46:46.454508  777983 start.go:143] virtualization:  
	I1115 11:46:46.458034  777983 out.go:179] * [embed-certs-404149] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:46:46.462544  777983 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:46:46.462634  777983 notify.go:221] Checking for updates...
	I1115 11:46:46.469124  777983 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:46:46.472351  777983 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:46:46.475951  777983 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:46:46.478917  777983 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:46:46.481904  777983 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:46:46.485441  777983 config.go:182] Loaded profile config "default-k8s-diff-port-769461": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:46:46.485553  777983 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:46:46.521374  777983 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:46:46.521502  777983 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:46:46.578213  777983 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:46:46.569088155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:46:46.578320  777983 docker.go:319] overlay module found
	I1115 11:46:46.581649  777983 out.go:179] * Using the docker driver based on user configuration
	I1115 11:46:46.584667  777983 start.go:309] selected driver: docker
	I1115 11:46:46.584690  777983 start.go:930] validating driver "docker" against <nil>
	I1115 11:46:46.584704  777983 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:46:46.585613  777983 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:46:46.646282  777983 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:46:46.637370204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:46:46.646449  777983 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 11:46:46.646732  777983 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:46:46.649788  777983 out.go:179] * Using Docker driver with root privileges
	I1115 11:46:46.652709  777983 cni.go:84] Creating CNI manager for ""
	I1115 11:46:46.652775  777983 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:46:46.652788  777983 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 11:46:46.653015  777983 start.go:353] cluster config:
	{Name:embed-certs-404149 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:46:46.657892  777983 out.go:179] * Starting "embed-certs-404149" primary control-plane node in "embed-certs-404149" cluster
	I1115 11:46:46.660720  777983 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:46:46.663877  777983 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:46:46.666725  777983 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:46:46.666775  777983 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 11:46:46.666791  777983 cache.go:65] Caching tarball of preloaded images
	I1115 11:46:46.666813  777983 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:46:46.666874  777983 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:46:46.666883  777983 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:46:46.666991  777983 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/config.json ...
	I1115 11:46:46.667007  777983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/config.json: {Name:mk2f751c5197229d904ae0fe3d73abb9778ba6eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:46:46.686705  777983 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:46:46.686729  777983 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:46:46.686745  777983 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:46:46.686768  777983 start.go:360] acquireMachinesLock for embed-certs-404149: {Name:mka215e00af293eebe84cec598dbc8661faf4dbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:46:46.686876  777983 start.go:364] duration metric: took 87.205µs to acquireMachinesLock for "embed-certs-404149"
	I1115 11:46:46.686908  777983 start.go:93] Provisioning new machine with config: &{Name:embed-certs-404149 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:46:46.686995  777983 start.go:125] createHost starting for "" (driver="docker")
	W1115 11:46:46.124820  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	W1115 11:46:48.624208  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	I1115 11:46:46.690434  777983 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 11:46:46.690666  777983 start.go:159] libmachine.API.Create for "embed-certs-404149" (driver="docker")
	I1115 11:46:46.690702  777983 client.go:173] LocalClient.Create starting
	I1115 11:46:46.690773  777983 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 11:46:46.690818  777983 main.go:143] libmachine: Decoding PEM data...
	I1115 11:46:46.690832  777983 main.go:143] libmachine: Parsing certificate...
	I1115 11:46:46.690886  777983 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 11:46:46.690902  777983 main.go:143] libmachine: Decoding PEM data...
	I1115 11:46:46.690912  777983 main.go:143] libmachine: Parsing certificate...
	I1115 11:46:46.691279  777983 cli_runner.go:164] Run: docker network inspect embed-certs-404149 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 11:46:46.708041  777983 cli_runner.go:211] docker network inspect embed-certs-404149 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 11:46:46.708138  777983 network_create.go:284] running [docker network inspect embed-certs-404149] to gather additional debugging logs...
	I1115 11:46:46.708154  777983 cli_runner.go:164] Run: docker network inspect embed-certs-404149
	W1115 11:46:46.724125  777983 cli_runner.go:211] docker network inspect embed-certs-404149 returned with exit code 1
	I1115 11:46:46.724164  777983 network_create.go:287] error running [docker network inspect embed-certs-404149]: docker network inspect embed-certs-404149: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-404149 not found
	I1115 11:46:46.724180  777983 network_create.go:289] output of [docker network inspect embed-certs-404149]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-404149 not found
	
	** /stderr **
	I1115 11:46:46.724280  777983 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:46:46.741809  777983 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-70b4341e5839 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:cf:e4:18:31:11} reservation:<nil>}
	I1115 11:46:46.742188  777983 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5353e0ad5224 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:f4:9a:df:ce:52} reservation:<nil>}
	I1115 11:46:46.742577  777983 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-cf2ab118f937 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:c9:22:19:21:27} reservation:<nil>}
	I1115 11:46:46.743012  777983 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fcfc0}
	I1115 11:46:46.743038  777983 network_create.go:124] attempt to create docker network embed-certs-404149 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1115 11:46:46.743093  777983 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-404149 embed-certs-404149
	I1115 11:46:46.811316  777983 network_create.go:108] docker network embed-certs-404149 192.168.76.0/24 created
	I1115 11:46:46.811351  777983 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-404149" container
	I1115 11:46:46.811443  777983 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 11:46:46.828203  777983 cli_runner.go:164] Run: docker volume create embed-certs-404149 --label name.minikube.sigs.k8s.io=embed-certs-404149 --label created_by.minikube.sigs.k8s.io=true
	I1115 11:46:46.857174  777983 oci.go:103] Successfully created a docker volume embed-certs-404149
	I1115 11:46:46.857284  777983 cli_runner.go:164] Run: docker run --rm --name embed-certs-404149-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-404149 --entrypoint /usr/bin/test -v embed-certs-404149:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 11:46:47.440060  777983 oci.go:107] Successfully prepared a docker volume embed-certs-404149
	I1115 11:46:47.440130  777983 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:46:47.440142  777983 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 11:46:47.440214  777983 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-404149:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1115 11:46:50.624344  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	W1115 11:46:52.625127  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	W1115 11:46:54.625645  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	I1115 11:46:51.889837  777983 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-404149:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.449583817s)
	I1115 11:46:51.889873  777983 kic.go:203] duration metric: took 4.449726169s to extract preloaded images to volume ...
	W1115 11:46:51.890019  777983 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 11:46:51.890151  777983 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 11:46:51.956211  777983 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-404149 --name embed-certs-404149 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-404149 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-404149 --network embed-certs-404149 --ip 192.168.76.2 --volume embed-certs-404149:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 11:46:52.316393  777983 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Running}}
	I1115 11:46:52.340170  777983 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:46:52.364984  777983 cli_runner.go:164] Run: docker exec embed-certs-404149 stat /var/lib/dpkg/alternatives/iptables
	I1115 11:46:52.417829  777983 oci.go:144] the created container "embed-certs-404149" has a running status.
	I1115 11:46:52.417858  777983 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa...
	I1115 11:46:53.477823  777983 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 11:46:53.499328  777983 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:46:53.523282  777983 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 11:46:53.523300  777983 kic_runner.go:114] Args: [docker exec --privileged embed-certs-404149 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 11:46:53.573449  777983 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:46:53.601323  777983 machine.go:94] provisionDockerMachine start ...
	I1115 11:46:53.601409  777983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:46:53.626751  777983 main.go:143] libmachine: Using SSH client type: native
	I1115 11:46:53.627231  777983 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1115 11:46:53.627246  777983 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:46:53.808762  777983 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-404149
	
	I1115 11:46:53.808785  777983 ubuntu.go:182] provisioning hostname "embed-certs-404149"
	I1115 11:46:53.808846  777983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:46:53.830359  777983 main.go:143] libmachine: Using SSH client type: native
	I1115 11:46:53.830687  777983 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1115 11:46:53.830710  777983 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-404149 && echo "embed-certs-404149" | sudo tee /etc/hostname
	I1115 11:46:54.005815  777983 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-404149
	
	I1115 11:46:54.005934  777983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:46:54.027787  777983 main.go:143] libmachine: Using SSH client type: native
	I1115 11:46:54.028129  777983 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1115 11:46:54.028151  777983 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-404149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-404149/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-404149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:46:54.197777  777983 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:46:54.197799  777983 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:46:54.197817  777983 ubuntu.go:190] setting up certificates
	I1115 11:46:54.197826  777983 provision.go:84] configureAuth start
	I1115 11:46:54.197887  777983 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-404149
	I1115 11:46:54.216397  777983 provision.go:143] copyHostCerts
	I1115 11:46:54.216463  777983 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:46:54.216473  777983 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:46:54.216549  777983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:46:54.216655  777983 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:46:54.216663  777983 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:46:54.216700  777983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:46:54.216775  777983 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:46:54.216780  777983 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:46:54.216802  777983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:46:54.216848  777983 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.embed-certs-404149 san=[127.0.0.1 192.168.76.2 embed-certs-404149 localhost minikube]
	I1115 11:46:54.954531  777983 provision.go:177] copyRemoteCerts
	I1115 11:46:54.954602  777983 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:46:54.954655  777983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:46:54.971929  777983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:46:55.104986  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:46:55.132725  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 11:46:55.165046  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:46:55.185436  777983 provision.go:87] duration metric: took 987.585382ms to configureAuth
	I1115 11:46:55.185515  777983 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:46:55.185744  777983 config.go:182] Loaded profile config "embed-certs-404149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:46:55.185898  777983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:46:55.204134  777983 main.go:143] libmachine: Using SSH client type: native
	I1115 11:46:55.204560  777983 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1115 11:46:55.204597  777983 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:46:55.470537  777983 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:46:55.470609  777983 machine.go:97] duration metric: took 1.869257778s to provisionDockerMachine
	I1115 11:46:55.470635  777983 client.go:176] duration metric: took 8.779925274s to LocalClient.Create
	I1115 11:46:55.470679  777983 start.go:167] duration metric: took 8.779995502s to libmachine.API.Create "embed-certs-404149"
	I1115 11:46:55.470702  777983 start.go:293] postStartSetup for "embed-certs-404149" (driver="docker")
	I1115 11:46:55.470724  777983 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:46:55.470811  777983 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:46:55.470877  777983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:46:55.488430  777983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:46:55.597329  777983 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:46:55.600689  777983 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:46:55.600718  777983 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:46:55.600730  777983 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:46:55.600789  777983 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:46:55.600915  777983 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:46:55.601033  777983 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:46:55.608709  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:46:55.628965  777983 start.go:296] duration metric: took 158.235861ms for postStartSetup
	I1115 11:46:55.629405  777983 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-404149
	I1115 11:46:55.648204  777983 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/config.json ...
	I1115 11:46:55.648490  777983 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:46:55.648537  777983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:46:55.667406  777983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:46:55.770162  777983 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:46:55.775131  777983 start.go:128] duration metric: took 9.088119766s to createHost
	I1115 11:46:55.775156  777983 start.go:83] releasing machines lock for "embed-certs-404149", held for 9.088265949s
	I1115 11:46:55.775228  777983 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-404149
	I1115 11:46:55.792591  777983 ssh_runner.go:195] Run: cat /version.json
	I1115 11:46:55.792652  777983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:46:55.793053  777983 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:46:55.793125  777983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:46:55.812989  777983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:46:55.825734  777983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:46:55.920917  777983 ssh_runner.go:195] Run: systemctl --version
	I1115 11:46:56.013306  777983 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:46:56.067583  777983 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:46:56.071878  777983 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:46:56.071946  777983 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:46:56.102788  777983 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 11:46:56.102810  777983 start.go:496] detecting cgroup driver to use...
	I1115 11:46:56.102844  777983 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:46:56.102898  777983 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:46:56.126623  777983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:46:56.139676  777983 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:46:56.139739  777983 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:46:56.159664  777983 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:46:56.177658  777983 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:46:56.307124  777983 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:46:56.436658  777983 docker.go:234] disabling docker service ...
	I1115 11:46:56.436732  777983 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:46:56.463578  777983 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:46:56.478400  777983 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:46:56.598789  777983 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:46:56.724498  777983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:46:56.738521  777983 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:46:56.753337  777983 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:46:56.753410  777983 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:46:56.762697  777983 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:46:56.762802  777983 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:46:56.772272  777983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:46:56.782060  777983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:46:56.791092  777983 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:46:56.799248  777983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:46:56.807981  777983 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:46:56.821908  777983 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:46:56.831191  777983 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:46:56.838928  777983 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:46:56.846392  777983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:46:56.960838  777983 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:46:57.096127  777983 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:46:57.096229  777983 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:46:57.100257  777983 start.go:564] Will wait 60s for crictl version
	I1115 11:46:57.100342  777983 ssh_runner.go:195] Run: which crictl
	I1115 11:46:57.103797  777983 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:46:57.141398  777983 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:46:57.141525  777983 ssh_runner.go:195] Run: crio --version
	I1115 11:46:57.173821  777983 ssh_runner.go:195] Run: crio --version
	I1115 11:46:57.207866  777983 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1115 11:46:57.123973  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	W1115 11:46:59.127999  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	I1115 11:46:57.210746  777983 cli_runner.go:164] Run: docker network inspect embed-certs-404149 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:46:57.226824  777983 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 11:46:57.230826  777983 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:46:57.241480  777983 kubeadm.go:884] updating cluster {Name:embed-certs-404149 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:46:57.241603  777983 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:46:57.241665  777983 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:46:57.274638  777983 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:46:57.274662  777983 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:46:57.274720  777983 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:46:57.300075  777983 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:46:57.300100  777983 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:46:57.300108  777983 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 11:46:57.300193  777983 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-404149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:46:57.300281  777983 ssh_runner.go:195] Run: crio config
	I1115 11:46:57.352972  777983 cni.go:84] Creating CNI manager for ""
	I1115 11:46:57.352994  777983 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:46:57.353020  777983 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:46:57.353043  777983 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-404149 NodeName:embed-certs-404149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:46:57.353166  777983 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-404149"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 11:46:57.353239  777983 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:46:57.360989  777983 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:46:57.361117  777983 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:46:57.368518  777983 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 11:46:57.381191  777983 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:46:57.394171  777983 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1115 11:46:57.407564  777983 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:46:57.411250  777983 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:46:57.421433  777983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:46:57.535382  777983 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:46:57.555249  777983 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149 for IP: 192.168.76.2
	I1115 11:46:57.555269  777983 certs.go:195] generating shared ca certs ...
	I1115 11:46:57.555285  777983 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:46:57.555413  777983 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:46:57.555452  777983 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:46:57.555458  777983 certs.go:257] generating profile certs ...
	I1115 11:46:57.555519  777983 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/client.key
	I1115 11:46:57.555530  777983 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/client.crt with IP's: []
	I1115 11:46:58.069280  777983 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/client.crt ...
	I1115 11:46:58.069359  777983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/client.crt: {Name:mkf5977d2c9a2af4a5c27b146e05e90a6a80b074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:46:58.069580  777983 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/client.key ...
	I1115 11:46:58.069620  777983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/client.key: {Name:mk5cb7435b388c2885d82c97a892b178bec62e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:46:58.069774  777983 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.key.feb77388
	I1115 11:46:58.069817  777983 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.crt.feb77388 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1115 11:46:58.732134  777983 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.crt.feb77388 ...
	I1115 11:46:58.732167  777983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.crt.feb77388: {Name:mk9b04de4b0d363bee2c0e2f583d1438bb3e9444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:46:58.732344  777983 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.key.feb77388 ...
	I1115 11:46:58.732359  777983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.key.feb77388: {Name:mkfaa3c316cb0b00632396b5979bd572254e2dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:46:58.732447  777983 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.crt.feb77388 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.crt
	I1115 11:46:58.732524  777983 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.key.feb77388 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.key
	I1115 11:46:58.732583  777983 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/proxy-client.key
	I1115 11:46:58.732602  777983 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/proxy-client.crt with IP's: []
	I1115 11:47:00.890115  777983 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/proxy-client.crt ...
	I1115 11:47:00.890148  777983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/proxy-client.crt: {Name:mk348c3081d02e9ba5f0eb89a4ca64b01549e706 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:47:00.890339  777983 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/proxy-client.key ...
	I1115 11:47:00.890353  777983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/proxy-client.key: {Name:mk1641d2f8536bec4e618fb51733cbe561f103cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:47:00.890547  777983 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:47:00.890590  777983 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:47:00.890600  777983 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:47:00.890636  777983 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:47:00.890664  777983 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:47:00.890689  777983 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:47:00.890734  777983 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:47:00.891397  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:47:00.912052  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:47:00.931211  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:47:00.948607  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:47:00.966510  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 11:47:00.985771  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:47:01.006679  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:47:01.025303  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 11:47:01.044645  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:47:01.064237  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:47:01.082930  777983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:47:01.102855  777983 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:47:01.118849  777983 ssh_runner.go:195] Run: openssl version
	I1115 11:47:01.125496  777983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:47:01.134041  777983 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:47:01.137971  777983 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:47:01.138054  777983 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:47:01.179637  777983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:47:01.189671  777983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:47:01.199670  777983 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:47:01.203886  777983 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:47:01.203959  777983 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:47:01.245794  777983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:47:01.254868  777983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:47:01.263522  777983 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:47:01.267615  777983 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:47:01.267707  777983 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:47:01.309913  777983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:47:01.318771  777983 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:47:01.322416  777983 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 11:47:01.322489  777983 kubeadm.go:401] StartCluster: {Name:embed-certs-404149 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:47:01.322567  777983 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:47:01.322629  777983 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:47:01.353089  777983 cri.go:89] found id: ""
	I1115 11:47:01.353249  777983 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:47:01.361512  777983 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 11:47:01.369610  777983 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 11:47:01.369755  777983 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 11:47:01.377796  777983 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 11:47:01.377817  777983 kubeadm.go:158] found existing configuration files:
	
	I1115 11:47:01.377902  777983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 11:47:01.385806  777983 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 11:47:01.385892  777983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 11:47:01.394767  777983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 11:47:01.403111  777983 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 11:47:01.403186  777983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 11:47:01.411324  777983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 11:47:01.421931  777983 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 11:47:01.422049  777983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 11:47:01.436173  777983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 11:47:01.448221  777983 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 11:47:01.448342  777983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 11:47:01.456522  777983 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 11:47:01.506212  777983 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 11:47:01.506382  777983 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 11:47:01.532284  777983 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 11:47:01.532474  777983 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 11:47:01.532544  777983 kubeadm.go:319] OS: Linux
	I1115 11:47:01.532623  777983 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 11:47:01.532704  777983 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 11:47:01.532779  777983 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 11:47:01.532900  777983 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 11:47:01.532987  777983 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 11:47:01.533082  777983 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 11:47:01.533166  777983 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 11:47:01.533245  777983 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 11:47:01.533321  777983 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 11:47:01.608627  777983 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 11:47:01.608810  777983 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 11:47:01.609019  777983 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 11:47:01.621474  777983 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1115 11:47:01.624450  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	W1115 11:47:03.625139  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	I1115 11:47:01.624695  777983 out.go:252]   - Generating certificates and keys ...
	I1115 11:47:01.624833  777983 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 11:47:01.624983  777983 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 11:47:01.882892  777983 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 11:47:02.172033  777983 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 11:47:02.636279  777983 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 11:47:02.960163  777983 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 11:47:03.106398  777983 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 11:47:03.106776  777983 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-404149 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 11:47:03.638502  777983 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 11:47:03.638866  777983 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-404149 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 11:47:04.550570  777983 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 11:47:05.496438  777983 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 11:47:06.320851  777983 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 11:47:06.321063  777983 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 11:47:06.553163  777983 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 11:47:07.585284  777983 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 11:47:07.825277  777983 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 11:47:08.083706  777983 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 11:47:08.567716  777983 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 11:47:08.568379  777983 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 11:47:08.571273  777983 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1115 11:47:06.123930  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	W1115 11:47:08.124555  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	I1115 11:47:08.574900  777983 out.go:252]   - Booting up control plane ...
	I1115 11:47:08.575027  777983 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 11:47:08.575115  777983 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 11:47:08.575190  777983 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 11:47:08.591188  777983 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 11:47:08.591509  777983 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 11:47:08.600799  777983 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 11:47:08.601662  777983 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 11:47:08.601720  777983 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 11:47:08.745066  777983 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 11:47:08.745196  777983 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 11:47:09.746223  777983 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001463321s
	I1115 11:47:09.749887  777983 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 11:47:09.749995  777983 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1115 11:47:09.750246  777983 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 11:47:09.750344  777983 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1115 11:47:10.627332  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	W1115 11:47:13.131670  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	I1115 11:47:12.642684  777983 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.892168182s
	I1115 11:47:14.832352  777983 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.082381293s
	I1115 11:47:16.251304  777983 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501306392s
	I1115 11:47:16.278477  777983 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 11:47:16.294408  777983 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 11:47:16.316604  777983 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 11:47:16.317124  777983 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-404149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 11:47:16.330721  777983 kubeadm.go:319] [bootstrap-token] Using token: s5mrxi.vph9y050gzpvkaay
	I1115 11:47:16.333808  777983 out.go:252]   - Configuring RBAC rules ...
	I1115 11:47:16.333932  777983 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 11:47:16.342159  777983 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 11:47:16.351766  777983 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 11:47:16.362381  777983 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 11:47:16.371105  777983 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 11:47:16.375876  777983 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 11:47:16.658790  777983 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 11:47:17.090510  777983 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 11:47:17.658733  777983 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 11:47:17.659917  777983 kubeadm.go:319] 
	I1115 11:47:17.659995  777983 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 11:47:17.660005  777983 kubeadm.go:319] 
	I1115 11:47:17.660081  777983 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 11:47:17.660089  777983 kubeadm.go:319] 
	I1115 11:47:17.660114  777983 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 11:47:17.660176  777983 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 11:47:17.660229  777983 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 11:47:17.660237  777983 kubeadm.go:319] 
	I1115 11:47:17.660291  777983 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 11:47:17.660298  777983 kubeadm.go:319] 
	I1115 11:47:17.660346  777983 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 11:47:17.660354  777983 kubeadm.go:319] 
	I1115 11:47:17.660405  777983 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 11:47:17.660483  777983 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 11:47:17.660554  777983 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 11:47:17.660562  777983 kubeadm.go:319] 
	I1115 11:47:17.660645  777983 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 11:47:17.660730  777983 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 11:47:17.660742  777983 kubeadm.go:319] 
	I1115 11:47:17.660825  777983 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s5mrxi.vph9y050gzpvkaay \
	I1115 11:47:17.660959  777983 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a \
	I1115 11:47:17.660986  777983 kubeadm.go:319] 	--control-plane 
	I1115 11:47:17.660996  777983 kubeadm.go:319] 
	I1115 11:47:17.661085  777983 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 11:47:17.661094  777983 kubeadm.go:319] 
	I1115 11:47:17.661175  777983 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s5mrxi.vph9y050gzpvkaay \
	I1115 11:47:17.661281  777983 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a 
	I1115 11:47:17.665759  777983 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 11:47:17.665997  777983 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 11:47:17.666117  777983 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
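The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed on the control plane with the standard kubeadm recipe; a sketch assuming the certificateDir reported earlier in this run (/var/lib/minikube/certs):

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'   # should reproduce the sha256:b104... value above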
	I1115 11:47:17.666140  777983 cni.go:84] Creating CNI manager for ""
	I1115 11:47:17.666149  777983 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:47:17.669410  777983 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1115 11:47:15.623904  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	W1115 11:47:17.624695  774191 node_ready.go:57] node "default-k8s-diff-port-769461" has "Ready":"False" status (will retry)
	I1115 11:47:19.147122  774191 node_ready.go:49] node "default-k8s-diff-port-769461" is "Ready"
	I1115 11:47:19.147150  774191 node_ready.go:38] duration metric: took 39.526149275s for node "default-k8s-diff-port-769461" to be "Ready" ...
	I1115 11:47:19.147164  774191 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:47:19.147222  774191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:47:19.165365  774191 api_server.go:72] duration metric: took 41.565295395s to wait for apiserver process to appear ...
	I1115 11:47:19.165387  774191 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:47:19.165406  774191 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 11:47:19.186317  774191 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1115 11:47:19.187488  774191 api_server.go:141] control plane version: v1.34.1
	I1115 11:47:19.187517  774191 api_server.go:131] duration metric: took 22.123695ms to wait for apiserver health ...
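The healthz probe logged above is an ordinary HTTPS GET against the apiserver; a hedged equivalent from the host (-k skips CA verification here purely for illustration, since this sketch does not load minikube's CA):

  curl -k https://192.168.85.2:8444/healthz   # endpoint taken from the log; prints "ok" on success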
	I1115 11:47:19.187531  774191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:47:19.191549  774191 system_pods.go:59] 8 kube-system pods found
	I1115 11:47:19.191582  774191 system_pods.go:61] "coredns-66bc5c9577-xpkjw" [70eed49b-a283-4cc7-ac67-71e32653ab35] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:47:19.191588  774191 system_pods.go:61] "etcd-default-k8s-diff-port-769461" [af98b066-3f75-431d-80f7-4acee1838af0] Running
	I1115 11:47:19.191595  774191 system_pods.go:61] "kindnet-kzp2q" [64bdadbe-69c1-445f-85af-a9efd841c7b9] Running
	I1115 11:47:19.191599  774191 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-769461" [b571a160-49cb-4df1-b2a7-d48a6e3b4ffe] Running
	I1115 11:47:19.191606  774191 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-769461" [ed50fc98-2e05-498f-b214-6efbdfbb592d] Running
	I1115 11:47:19.191611  774191 system_pods.go:61] "kube-proxy-j8s2w" [dbf02ced-547a-4bfd-b59d-1ff41c5da369] Running
	I1115 11:47:19.191615  774191 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-769461" [d6423eb5-3513-48c3-ab04-640e8b8ba7c9] Running
	I1115 11:47:19.191620  774191 system_pods.go:61] "storage-provisioner" [221d3633-db7a-4b63-8cb1-84cb8a39832d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:47:19.191627  774191 system_pods.go:74] duration metric: took 4.082967ms to wait for pod list to return data ...
	I1115 11:47:19.191635  774191 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:47:19.201922  774191 default_sa.go:45] found service account: "default"
	I1115 11:47:19.201998  774191 default_sa.go:55] duration metric: took 10.355726ms for default service account to be created ...
	I1115 11:47:19.202022  774191 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:47:19.218805  774191 system_pods.go:86] 8 kube-system pods found
	I1115 11:47:19.218892  774191 system_pods.go:89] "coredns-66bc5c9577-xpkjw" [70eed49b-a283-4cc7-ac67-71e32653ab35] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:47:19.218918  774191 system_pods.go:89] "etcd-default-k8s-diff-port-769461" [af98b066-3f75-431d-80f7-4acee1838af0] Running
	I1115 11:47:19.218968  774191 system_pods.go:89] "kindnet-kzp2q" [64bdadbe-69c1-445f-85af-a9efd841c7b9] Running
	I1115 11:47:19.218991  774191 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-769461" [b571a160-49cb-4df1-b2a7-d48a6e3b4ffe] Running
	I1115 11:47:19.219010  774191 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-769461" [ed50fc98-2e05-498f-b214-6efbdfbb592d] Running
	I1115 11:47:19.219030  774191 system_pods.go:89] "kube-proxy-j8s2w" [dbf02ced-547a-4bfd-b59d-1ff41c5da369] Running
	I1115 11:47:19.219048  774191 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-769461" [d6423eb5-3513-48c3-ab04-640e8b8ba7c9] Running
	I1115 11:47:19.219118  774191 system_pods.go:89] "storage-provisioner" [221d3633-db7a-4b63-8cb1-84cb8a39832d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:47:19.219163  774191 retry.go:31] will retry after 309.230003ms: missing components: kube-dns
	I1115 11:47:19.531927  774191 system_pods.go:86] 8 kube-system pods found
	I1115 11:47:19.531958  774191 system_pods.go:89] "coredns-66bc5c9577-xpkjw" [70eed49b-a283-4cc7-ac67-71e32653ab35] Running
	I1115 11:47:19.531965  774191 system_pods.go:89] "etcd-default-k8s-diff-port-769461" [af98b066-3f75-431d-80f7-4acee1838af0] Running
	I1115 11:47:19.531971  774191 system_pods.go:89] "kindnet-kzp2q" [64bdadbe-69c1-445f-85af-a9efd841c7b9] Running
	I1115 11:47:19.531976  774191 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-769461" [b571a160-49cb-4df1-b2a7-d48a6e3b4ffe] Running
	I1115 11:47:19.531981  774191 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-769461" [ed50fc98-2e05-498f-b214-6efbdfbb592d] Running
	I1115 11:47:19.531984  774191 system_pods.go:89] "kube-proxy-j8s2w" [dbf02ced-547a-4bfd-b59d-1ff41c5da369] Running
	I1115 11:47:19.531989  774191 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-769461" [d6423eb5-3513-48c3-ab04-640e8b8ba7c9] Running
	I1115 11:47:19.531992  774191 system_pods.go:89] "storage-provisioner" [221d3633-db7a-4b63-8cb1-84cb8a39832d] Running
	I1115 11:47:19.532000  774191 system_pods.go:126] duration metric: took 329.959165ms to wait for k8s-apps to be running ...
	I1115 11:47:19.532012  774191 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:47:19.532067  774191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:47:19.545088  774191 system_svc.go:56] duration metric: took 13.065156ms WaitForService to wait for kubelet
	I1115 11:47:19.545117  774191 kubeadm.go:587] duration metric: took 41.945053099s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:47:19.545156  774191 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:47:19.548105  774191 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:47:19.548138  774191 node_conditions.go:123] node cpu capacity is 2
	I1115 11:47:19.548151  774191 node_conditions.go:105] duration metric: took 2.986545ms to run NodePressure ...
	I1115 11:47:19.548162  774191 start.go:242] waiting for startup goroutines ...
	I1115 11:47:19.548192  774191 start.go:247] waiting for cluster config update ...
	I1115 11:47:19.548215  774191 start.go:256] writing updated cluster config ...
	I1115 11:47:19.548508  774191 ssh_runner.go:195] Run: rm -f paused
	I1115 11:47:19.551883  774191 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:47:19.555300  774191 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xpkjw" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:47:19.560485  774191 pod_ready.go:94] pod "coredns-66bc5c9577-xpkjw" is "Ready"
	I1115 11:47:19.560510  774191 pod_ready.go:86] duration metric: took 5.181268ms for pod "coredns-66bc5c9577-xpkjw" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:47:19.562700  774191 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:47:19.568150  774191 pod_ready.go:94] pod "etcd-default-k8s-diff-port-769461" is "Ready"
	I1115 11:47:19.568228  774191 pod_ready.go:86] duration metric: took 5.50672ms for pod "etcd-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:47:19.572773  774191 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:47:19.577705  774191 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-769461" is "Ready"
	I1115 11:47:19.577748  774191 pod_ready.go:86] duration metric: took 4.949003ms for pod "kube-apiserver-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:47:19.580203  774191 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:47:19.956231  774191 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-769461" is "Ready"
	I1115 11:47:19.956256  774191 pod_ready.go:86] duration metric: took 376.028662ms for pod "kube-controller-manager-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:47:20.156764  774191 pod_ready.go:83] waiting for pod "kube-proxy-j8s2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:47:20.555653  774191 pod_ready.go:94] pod "kube-proxy-j8s2w" is "Ready"
	I1115 11:47:20.555682  774191 pod_ready.go:86] duration metric: took 398.890693ms for pod "kube-proxy-j8s2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:47:20.756765  774191 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:47:21.156119  774191 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-769461" is "Ready"
	I1115 11:47:21.156149  774191 pod_ready.go:86] duration metric: took 399.356732ms for pod "kube-scheduler-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:47:21.156163  774191 pod_ready.go:40] duration metric: took 1.604247465s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:47:21.244460  774191 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:47:21.247566  774191 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-769461" cluster and "default" namespace by default
	I1115 11:47:17.672347  777983 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 11:47:17.676490  777983 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 11:47:17.676511  777983 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 11:47:17.689771  777983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 11:47:18.013211  777983 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 11:47:18.013363  777983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:47:18.013450  777983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-404149 minikube.k8s.io/updated_at=2025_11_15T11_47_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=embed-certs-404149 minikube.k8s.io/primary=true
	I1115 11:47:18.307104  777983 ops.go:34] apiserver oom_adj: -16
	I1115 11:47:18.307207  777983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:47:18.808077  777983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:47:19.307284  777983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:47:19.808187  777983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:47:20.307835  777983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:47:20.807375  777983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:47:21.307959  777983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:47:21.808217  777983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:47:22.307781  777983 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:47:22.481685  777983 kubeadm.go:1114] duration metric: took 4.468371611s to wait for elevateKubeSystemPrivileges
	I1115 11:47:22.481715  777983 kubeadm.go:403] duration metric: took 21.15923074s to StartCluster
	I1115 11:47:22.481732  777983 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:47:22.481803  777983 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:47:22.483319  777983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:47:22.483590  777983 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:47:22.483798  777983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 11:47:22.484122  777983 config.go:182] Loaded profile config "embed-certs-404149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:47:22.484158  777983 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:47:22.484218  777983 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-404149"
	I1115 11:47:22.484237  777983 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-404149"
	I1115 11:47:22.484261  777983 host.go:66] Checking if "embed-certs-404149" exists ...
	I1115 11:47:22.484528  777983 addons.go:70] Setting default-storageclass=true in profile "embed-certs-404149"
	I1115 11:47:22.484550  777983 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-404149"
	I1115 11:47:22.484843  777983 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:47:22.485330  777983 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:47:22.489372  777983 out.go:179] * Verifying Kubernetes components...
	I1115 11:47:22.496052  777983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:47:22.536803  777983 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:47:22.540151  777983 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:47:22.540175  777983 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:47:22.540266  777983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:47:22.542668  777983 addons.go:239] Setting addon default-storageclass=true in "embed-certs-404149"
	I1115 11:47:22.542715  777983 host.go:66] Checking if "embed-certs-404149" exists ...
	I1115 11:47:22.543152  777983 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:47:22.587384  777983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:47:22.587423  777983 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:47:22.587437  777983 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:47:22.587504  777983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:47:22.617369  777983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:47:22.838175  777983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 11:47:22.871587  777983 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:47:22.952611  777983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:47:22.960464  777983 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:47:23.507113  777983 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1115 11:47:23.509309  777983 node_ready.go:35] waiting up to 6m0s for node "embed-certs-404149" to be "Ready" ...
	I1115 11:47:23.804564  777983 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1115 11:47:23.807473  777983 addons.go:515] duration metric: took 1.3232948s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1115 11:47:24.014381  777983 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-404149" context rescaled to 1 replicas
	W1115 11:47:25.512327  777983 node_ready.go:57] node "embed-certs-404149" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 15 11:47:19 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:19.070662976Z" level=info msg="Created container 0cc6fb4a73598423a616a484eead266008564e7846311c5fe0d0e2881544941e: kube-system/coredns-66bc5c9577-xpkjw/coredns" id=e32adc49-3e42-4891-8f9c-35f8e31b03dd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:47:19 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:19.077635403Z" level=info msg="Starting container: 0cc6fb4a73598423a616a484eead266008564e7846311c5fe0d0e2881544941e" id=4cb5b6b3-4cb0-4277-8e50-7f795d806338 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:47:19 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:19.08662583Z" level=info msg="Started container" PID=1731 containerID=0cc6fb4a73598423a616a484eead266008564e7846311c5fe0d0e2881544941e description=kube-system/coredns-66bc5c9577-xpkjw/coredns id=4cb5b6b3-4cb0-4277-8e50-7f795d806338 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0b1109bfc89e3b41d447b81243795f1457b95a9224702529fbd5691346e590b
	Nov 15 11:47:21 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:21.852372179Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5a32d3d6-953e-4b9a-a1cd-999d3cdea4a1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:47:21 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:21.852445279Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:47:21 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:21.864263315Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bcd954c22fd8e1df6b5911014fc506ac68cf11f11d5e88dff42fe1cd6f0c336f UID:c2df1c11-9c1f-46d6-ad9f-04f87ba7c040 NetNS:/var/run/netns/0f961f7f-dddd-4515-b2e8-2744613c7127 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004ea868}] Aliases:map[]}"
	Nov 15 11:47:21 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:21.864437578Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 11:47:21 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:21.882201026Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bcd954c22fd8e1df6b5911014fc506ac68cf11f11d5e88dff42fe1cd6f0c336f UID:c2df1c11-9c1f-46d6-ad9f-04f87ba7c040 NetNS:/var/run/netns/0f961f7f-dddd-4515-b2e8-2744613c7127 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004ea868}] Aliases:map[]}"
	Nov 15 11:47:21 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:21.882505866Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 11:47:21 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:21.890598265Z" level=info msg="Ran pod sandbox bcd954c22fd8e1df6b5911014fc506ac68cf11f11d5e88dff42fe1cd6f0c336f with infra container: default/busybox/POD" id=5a32d3d6-953e-4b9a-a1cd-999d3cdea4a1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:47:21 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:21.892091713Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7437321e-53c1-4714-84d8-4dfef504098c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:47:21 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:21.892348651Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7437321e-53c1-4714-84d8-4dfef504098c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:47:21 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:21.892453079Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7437321e-53c1-4714-84d8-4dfef504098c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:47:21 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:21.896325648Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0655f434-4bb4-4403-973d-9c2bf46a3851 name=/runtime.v1.ImageService/PullImage
	Nov 15 11:47:21 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:21.901480167Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 11:47:24 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:24.120411423Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=0655f434-4bb4-4403-973d-9c2bf46a3851 name=/runtime.v1.ImageService/PullImage
	Nov 15 11:47:24 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:24.12110707Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=07de6979-a652-47bf-ab3d-2eb74e36b0b4 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:47:24 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:24.12257061Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=99b038d3-8f1a-4594-95b8-5862e70a1f46 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:47:24 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:24.128129655Z" level=info msg="Creating container: default/busybox/busybox" id=b2eacc71-b169-4b42-b15e-1c4b095090c7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:47:24 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:24.128257919Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:47:24 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:24.133167512Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:47:24 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:24.13368646Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:47:24 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:24.150800826Z" level=info msg="Created container d7ba5cba6c0ad20be189888d71c6938d79ac51e8f1abe384e0b7c46ca82e25e8: default/busybox/busybox" id=b2eacc71-b169-4b42-b15e-1c4b095090c7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:47:24 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:24.153521709Z" level=info msg="Starting container: d7ba5cba6c0ad20be189888d71c6938d79ac51e8f1abe384e0b7c46ca82e25e8" id=02485bf9-46e4-4fca-a60b-2a34bedd82fc name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:47:24 default-k8s-diff-port-769461 crio[837]: time="2025-11-15T11:47:24.155794186Z" level=info msg="Started container" PID=1788 containerID=d7ba5cba6c0ad20be189888d71c6938d79ac51e8f1abe384e0b7c46ca82e25e8 description=default/busybox/busybox id=02485bf9-46e4-4fca-a60b-2a34bedd82fc name=/runtime.v1.RuntimeService/StartContainer sandboxID=bcd954c22fd8e1df6b5911014fc506ac68cf11f11d5e88dff42fe1cd6f0c336f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	d7ba5cba6c0ad       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   6 seconds ago        Running             busybox                   0                   bcd954c22fd8e       busybox                                                default
	0cc6fb4a73598       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   f0b1109bfc89e       coredns-66bc5c9577-xpkjw                               kube-system
	bfe1b92ae8cfa       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   32090fb945c62       storage-provisioner                                    kube-system
	e18d815c6e248       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   76dfc693a269f       kindnet-kzp2q                                          kube-system
	6d9ea092cda4f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   ccd8985760395       kube-proxy-j8s2w                                       kube-system
	df41b93c33b10       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   fa020dbab106c       kube-scheduler-default-k8s-diff-port-769461            kube-system
	c79376eb83311       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   f2508f57d6876       kube-controller-manager-default-k8s-diff-port-769461   kube-system
	998671a7a442e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   b45dbb6911a54       etcd-default-k8s-diff-port-769461                      kube-system
	376d573be59ce       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   279fac855023d       kube-apiserver-default-k8s-diff-port-769461            kube-system
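A listing like the one above can be reproduced on the node with crictl, the same tool invoked earlier via "crictl ps -a --quiet"; a minimal sketch:

  sudo crictl ps -a   # lists all containers, running and exited, with their image, state, and pod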
	
	
	==> coredns [0cc6fb4a73598423a616a484eead266008564e7846311c5fe0d0e2881544941e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58529 - 49976 "HINFO IN 8712647795487518134.6645522543656348514. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.048998178s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-769461
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-769461
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=default-k8s-diff-port-769461
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_46_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:46:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-769461
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:47:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:47:18 +0000   Sat, 15 Nov 2025 11:46:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:47:18 +0000   Sat, 15 Nov 2025 11:46:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:47:18 +0000   Sat, 15 Nov 2025 11:46:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:47:18 +0000   Sat, 15 Nov 2025 11:47:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-769461
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                2d12c0bf-fabd-4e79-9141-b51555b040a7
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-xpkjw                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-default-k8s-diff-port-769461                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-kzp2q                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-769461             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-769461    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-j8s2w                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-769461             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 52s                kube-proxy       
	  Warning  CgroupV1                 68s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 68s)  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 68s)  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 68s)  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientPID
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node default-k8s-diff-port-769461 event: Registered Node default-k8s-diff-port-769461 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-769461 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov15 11:23] overlayfs: idmapped layers are currently not supported
	[Nov15 11:24] overlayfs: idmapped layers are currently not supported
	[Nov15 11:25] overlayfs: idmapped layers are currently not supported
	[Nov15 11:26] overlayfs: idmapped layers are currently not supported
	[Nov15 11:28] overlayfs: idmapped layers are currently not supported
	[ +23.116625] overlayfs: idmapped layers are currently not supported
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	[Nov15 11:44] overlayfs: idmapped layers are currently not supported
	[Nov15 11:46] overlayfs: idmapped layers are currently not supported
	[Nov15 11:47] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [998671a7a442e9947c64960822774a1511aea6040146f142b3aafa4c8a509009] <==
	{"level":"warn","ts":"2025-11-15T11:46:27.346992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.361904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.386928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.403984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.424183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.436128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.459302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.472015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.539260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.556718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.575875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.617207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.641271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.689665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.723057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.777387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.789810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.821092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.865133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.917489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.975017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:27.997515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:28.028987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:46:28.126807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53444","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T11:46:35.326801Z","caller":"traceutil/trace.go:172","msg":"trace[566410632] transaction","detail":"{read_only:false; response_revision:330; number_of_response:1; }","duration":"274.433873ms","start":"2025-11-15T11:46:35.052352Z","end":"2025-11-15T11:46:35.326786Z","steps":["trace[566410632] 'process raft request'  (duration: 274.334295ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:47:31 up  3:30,  0 user,  load average: 3.19, 3.36, 2.84
	Linux default-k8s-diff-port-769461 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e18d815c6e248b0602017d7f7a9140d8ae49af7b23dd2d840593b36129323560] <==
	I1115 11:46:37.938434       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:46:37.943970       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 11:46:37.944107       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:46:37.944119       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:46:37.944129       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:46:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:46:38.191412       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:46:38.191442       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:46:38.191466       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:46:38.191589       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 11:47:08.193393       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 11:47:08.194378       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 11:47:08.195420       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 11:47:08.195511       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1115 11:47:09.792003       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:47:09.792035       1 metrics.go:72] Registering metrics
	I1115 11:47:09.792091       1 controller.go:711] "Syncing nftables rules"
	I1115 11:47:18.192556       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:47:18.192664       1 main.go:301] handling current node
	I1115 11:47:28.192171       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:47:28.192205       1 main.go:301] handling current node
	
	
	==> kube-apiserver [376d573be59ceb27be6ad0c0045a1f03a824ca1bf0519bd9c7c64c75294c643b] <==
	I1115 11:46:29.229851       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 11:46:29.232257       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 11:46:29.310225       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 11:46:29.310486       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:46:29.349885       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:46:29.349952       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 11:46:29.406462       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:46:29.832023       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 11:46:29.837444       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 11:46:29.837474       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:46:30.683030       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:46:30.737196       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:46:30.847162       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 11:46:30.855224       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1115 11:46:30.856422       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 11:46:30.861562       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 11:46:31.092285       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 11:46:32.106710       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 11:46:32.126340       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 11:46:32.159001       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 11:46:36.850205       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1115 11:46:37.088647       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 11:46:37.408712       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:46:37.454806       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1115 11:47:29.655900       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:42898: use of closed network connection
	
	
	==> kube-controller-manager [c79376eb8331188d964f3b9995eaf8fcdd905c9d02963b865825da8054afc055] <==
	I1115 11:46:36.392573       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 11:46:36.392668       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 11:46:36.393326       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 11:46:36.393388       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 11:46:36.397083       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 11:46:36.397540       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 11:46:36.397930       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 11:46:36.397985       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 11:46:36.399664       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 11:46:36.400312       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 11:46:36.400387       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 11:46:36.400452       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-769461"
	I1115 11:46:36.400496       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 11:46:36.401591       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:46:36.403598       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 11:46:36.412533       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:46:36.416817       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 11:46:36.430310       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 11:46:36.442244       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 11:46:36.456154       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:46:36.465311       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-769461" podCIDRs=["10.244.0.0/24"]
	I1115 11:46:36.483241       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:46:36.483287       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:46:36.483295       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:47:21.407549       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6d9ea092cda4f1f013b36d8e37c1345d265436e86b4be2891e25d172c1645d06] <==
	I1115 11:46:37.925910       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:46:38.322004       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:46:38.522732       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:46:38.522776       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 11:46:38.522867       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:46:38.665568       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:46:38.665618       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:46:38.674978       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:46:38.675299       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:46:38.675571       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:46:38.676776       1 config.go:200] "Starting service config controller"
	I1115 11:46:38.676786       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:46:38.676802       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:46:38.676806       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:46:38.676818       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:46:38.676822       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:46:38.677515       1 config.go:309] "Starting node config controller"
	I1115 11:46:38.677532       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:46:38.677538       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:46:38.779916       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:46:38.779956       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 11:46:38.780005       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [df41b93c33b10da2b57ee360cfe6842634d4544da83e4ee774d95e0c05e26837] <==
	I1115 11:46:29.544804       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:46:29.545749       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 11:46:29.545839       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1115 11:46:29.555642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 11:46:29.555819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 11:46:29.557069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 11:46:29.557491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 11:46:29.557639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 11:46:29.557694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 11:46:29.557851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:46:29.557888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 11:46:29.557922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:46:29.557976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 11:46:29.558028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 11:46:29.558066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 11:46:29.558117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 11:46:29.561347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 11:46:29.561457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 11:46:29.561654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 11:46:29.561720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 11:46:29.564124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 11:46:29.565380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 11:46:30.431895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 11:46:30.442139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1115 11:46:31.144999       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 11:46:36 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:46:36.557584    1303 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 11:46:36 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:46:36.906077    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zk7k\" (UniqueName: \"kubernetes.io/projected/dbf02ced-547a-4bfd-b59d-1ff41c5da369-kube-api-access-9zk7k\") pod \"kube-proxy-j8s2w\" (UID: \"dbf02ced-547a-4bfd-b59d-1ff41c5da369\") " pod="kube-system/kube-proxy-j8s2w"
	Nov 15 11:46:36 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:46:36.906146    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbf02ced-547a-4bfd-b59d-1ff41c5da369-xtables-lock\") pod \"kube-proxy-j8s2w\" (UID: \"dbf02ced-547a-4bfd-b59d-1ff41c5da369\") " pod="kube-system/kube-proxy-j8s2w"
	Nov 15 11:46:36 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:46:36.906173    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dbf02ced-547a-4bfd-b59d-1ff41c5da369-kube-proxy\") pod \"kube-proxy-j8s2w\" (UID: \"dbf02ced-547a-4bfd-b59d-1ff41c5da369\") " pod="kube-system/kube-proxy-j8s2w"
	Nov 15 11:46:36 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:46:36.906190    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbf02ced-547a-4bfd-b59d-1ff41c5da369-lib-modules\") pod \"kube-proxy-j8s2w\" (UID: \"dbf02ced-547a-4bfd-b59d-1ff41c5da369\") " pod="kube-system/kube-proxy-j8s2w"
	Nov 15 11:46:37 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:46:37.006766    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64bdadbe-69c1-445f-85af-a9efd841c7b9-xtables-lock\") pod \"kindnet-kzp2q\" (UID: \"64bdadbe-69c1-445f-85af-a9efd841c7b9\") " pod="kube-system/kindnet-kzp2q"
	Nov 15 11:46:37 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:46:37.008420    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64bdadbe-69c1-445f-85af-a9efd841c7b9-lib-modules\") pod \"kindnet-kzp2q\" (UID: \"64bdadbe-69c1-445f-85af-a9efd841c7b9\") " pod="kube-system/kindnet-kzp2q"
	Nov 15 11:46:37 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:46:37.008528    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/64bdadbe-69c1-445f-85af-a9efd841c7b9-cni-cfg\") pod \"kindnet-kzp2q\" (UID: \"64bdadbe-69c1-445f-85af-a9efd841c7b9\") " pod="kube-system/kindnet-kzp2q"
	Nov 15 11:46:37 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:46:37.008607    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64h7f\" (UniqueName: \"kubernetes.io/projected/64bdadbe-69c1-445f-85af-a9efd841c7b9-kube-api-access-64h7f\") pod \"kindnet-kzp2q\" (UID: \"64bdadbe-69c1-445f-85af-a9efd841c7b9\") " pod="kube-system/kindnet-kzp2q"
	Nov 15 11:46:37 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:46:37.148741    1303 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 11:46:37 default-k8s-diff-port-769461 kubelet[1303]: W1115 11:46:37.249665    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/crio-ccd8985760395850dfdcc452a02d3525348042fc013e94398241c4f555ca59ca WatchSource:0}: Error finding container ccd8985760395850dfdcc452a02d3525348042fc013e94398241c4f555ca59ca: Status 404 returned error can't find the container with id ccd8985760395850dfdcc452a02d3525348042fc013e94398241c4f555ca59ca
	Nov 15 11:46:37 default-k8s-diff-port-769461 kubelet[1303]: W1115 11:46:37.546494    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/crio-76dfc693a269f5f1d2978d207de24e194f980b2c55b747cfc8b70d7b20246b93 WatchSource:0}: Error finding container 76dfc693a269f5f1d2978d207de24e194f980b2c55b747cfc8b70d7b20246b93: Status 404 returned error can't find the container with id 76dfc693a269f5f1d2978d207de24e194f980b2c55b747cfc8b70d7b20246b93
	Nov 15 11:46:38 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:46:38.293865    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kzp2q" podStartSLOduration=2.293849853 podStartE2EDuration="2.293849853s" podCreationTimestamp="2025-11-15 11:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:46:38.293841664 +0000 UTC m=+6.373761899" watchObservedRunningTime="2025-11-15 11:46:38.293849853 +0000 UTC m=+6.373770080"
	Nov 15 11:46:38 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:46:38.445889    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j8s2w" podStartSLOduration=2.445872454 podStartE2EDuration="2.445872454s" podCreationTimestamp="2025-11-15 11:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:46:38.3799315 +0000 UTC m=+6.459851735" watchObservedRunningTime="2025-11-15 11:46:38.445872454 +0000 UTC m=+6.525792689"
	Nov 15 11:47:18 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:47:18.616942    1303 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 11:47:18 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:47:18.818253    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd8xt\" (UniqueName: \"kubernetes.io/projected/70eed49b-a283-4cc7-ac67-71e32653ab35-kube-api-access-fd8xt\") pod \"coredns-66bc5c9577-xpkjw\" (UID: \"70eed49b-a283-4cc7-ac67-71e32653ab35\") " pod="kube-system/coredns-66bc5c9577-xpkjw"
	Nov 15 11:47:18 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:47:18.818451    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwk5m\" (UniqueName: \"kubernetes.io/projected/221d3633-db7a-4b63-8cb1-84cb8a39832d-kube-api-access-hwk5m\") pod \"storage-provisioner\" (UID: \"221d3633-db7a-4b63-8cb1-84cb8a39832d\") " pod="kube-system/storage-provisioner"
	Nov 15 11:47:18 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:47:18.818551    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70eed49b-a283-4cc7-ac67-71e32653ab35-config-volume\") pod \"coredns-66bc5c9577-xpkjw\" (UID: \"70eed49b-a283-4cc7-ac67-71e32653ab35\") " pod="kube-system/coredns-66bc5c9577-xpkjw"
	Nov 15 11:47:18 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:47:18.818663    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/221d3633-db7a-4b63-8cb1-84cb8a39832d-tmp\") pod \"storage-provisioner\" (UID: \"221d3633-db7a-4b63-8cb1-84cb8a39832d\") " pod="kube-system/storage-provisioner"
	Nov 15 11:47:18 default-k8s-diff-port-769461 kubelet[1303]: W1115 11:47:18.976246    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/crio-32090fb945c62b33dd175685b60f7788bf048b166967a5b2435800ff13d79437 WatchSource:0}: Error finding container 32090fb945c62b33dd175685b60f7788bf048b166967a5b2435800ff13d79437: Status 404 returned error can't find the container with id 32090fb945c62b33dd175685b60f7788bf048b166967a5b2435800ff13d79437
	Nov 15 11:47:19 default-k8s-diff-port-769461 kubelet[1303]: W1115 11:47:19.001110    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/crio-f0b1109bfc89e3b41d447b81243795f1457b95a9224702529fbd5691346e590b WatchSource:0}: Error finding container f0b1109bfc89e3b41d447b81243795f1457b95a9224702529fbd5691346e590b: Status 404 returned error can't find the container with id f0b1109bfc89e3b41d447b81243795f1457b95a9224702529fbd5691346e590b
	Nov 15 11:47:19 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:47:19.309047    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xpkjw" podStartSLOduration=42.308935466 podStartE2EDuration="42.308935466s" podCreationTimestamp="2025-11-15 11:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:47:19.285480828 +0000 UTC m=+47.365401055" watchObservedRunningTime="2025-11-15 11:47:19.308935466 +0000 UTC m=+47.388855701"
	Nov 15 11:47:19 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:47:19.328824    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.328802142 podStartE2EDuration="40.328802142s" podCreationTimestamp="2025-11-15 11:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:47:19.31084935 +0000 UTC m=+47.390769585" watchObservedRunningTime="2025-11-15 11:47:19.328802142 +0000 UTC m=+47.408722369"
	Nov 15 11:47:21 default-k8s-diff-port-769461 kubelet[1303]: I1115 11:47:21.640350    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjtwz\" (UniqueName: \"kubernetes.io/projected/c2df1c11-9c1f-46d6-ad9f-04f87ba7c040-kube-api-access-gjtwz\") pod \"busybox\" (UID: \"c2df1c11-9c1f-46d6-ad9f-04f87ba7c040\") " pod="default/busybox"
	Nov 15 11:47:21 default-k8s-diff-port-769461 kubelet[1303]: W1115 11:47:21.887410    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/crio-bcd954c22fd8e1df6b5911014fc506ac68cf11f11d5e88dff42fe1cd6f0c336f WatchSource:0}: Error finding container bcd954c22fd8e1df6b5911014fc506ac68cf11f11d5e88dff42fe1cd6f0c336f: Status 404 returned error can't find the container with id bcd954c22fd8e1df6b5911014fc506ac68cf11f11d5e88dff42fe1cd6f0c336f
	
	
	==> storage-provisioner [bfe1b92ae8cfaeb907f4c1d366c0a8919b3bbf76de9e70d24eead4b38af3b673] <==
	I1115 11:47:19.060574       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 11:47:19.074540       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 11:47:19.074643       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 11:47:19.087898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:47:19.096015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:47:19.096276       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 11:47:19.096497       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-769461_3080802c-2d5f-4122-a267-1ea0cef59cc5!
	I1115 11:47:19.145134       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c930a73f-6b14-48e2-977d-fde466625e84", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-769461_3080802c-2d5f-4122-a267-1ea0cef59cc5 became leader
	W1115 11:47:19.193178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:47:19.238293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:47:19.300536       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-769461_3080802c-2d5f-4122-a267-1ea0cef59cc5!
	W1115 11:47:21.244473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:47:21.250445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:47:23.253597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:47:23.260280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:47:25.263975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:47:25.273652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:47:27.278778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:47:27.286106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:47:29.289233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:47:29.294345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:47:31.298392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:47:31.306028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-769461 -n default-k8s-diff-port-769461
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-769461 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-404149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-404149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (299.428869ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:48:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-404149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
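Note: the MK_ADDON_ENABLE_PAUSED error above comes from minikube's pre-flight check for paused containers, which runs the runc listing shown in the stderr on the node. A minimal way to reproduce that check by hand, assuming the embed-certs-404149 profile is still up, would be:

    out/minikube-linux-arm64 -p embed-certs-404149 ssh -- sudo runc list -f json

On a healthy crio node this prints a JSON array of containers; here it exits with status 1 ("open /run/runc: no such file or directory"), which minikube surfaces as the exit status 11 seen above.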
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-404149 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-404149 describe deploy/metrics-server -n kube-system: exit status 1 (81.345336ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-404149 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
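When the metrics-server deployment does get created, the image override can be checked directly; a sketch of such a query (the jsonpath expression is illustrative and not part of the test itself):

    kubectl --context embed-certs-404149 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[*].image}'

The test expects the output to contain fake.domain/registry.k8s.io/echoserver:1.4; in this run the deployment was never created because the addon enable command itself failed.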
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-404149
helpers_test.go:243: (dbg) docker inspect embed-certs-404149:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408",
	        "Created": "2025-11-15T11:46:51.97222958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 778375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:46:52.050729055Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/hostname",
	        "HostsPath": "/var/lib/docker/containers/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/hosts",
	        "LogPath": "/var/lib/docker/containers/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408-json.log",
	        "Name": "/embed-certs-404149",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-404149:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-404149",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408",
	                "LowerDir": "/var/lib/docker/overlay2/499cc6850e7e43e93965ff14ffb04ef4e117996f45283ec5f42c89d1ea43216c-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/499cc6850e7e43e93965ff14ffb04ef4e117996f45283ec5f42c89d1ea43216c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/499cc6850e7e43e93965ff14ffb04ef4e117996f45283ec5f42c89d1ea43216c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/499cc6850e7e43e93965ff14ffb04ef4e117996f45283ec5f42c89d1ea43216c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-404149",
	                "Source": "/var/lib/docker/volumes/embed-certs-404149/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-404149",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-404149",
	                "name.minikube.sigs.k8s.io": "embed-certs-404149",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0d09b2bf5ab60d53bd22b2c708fe13964a377197566eecf05d73ad3cc232776",
	            "SandboxKey": "/var/run/docker/netns/a0d09b2bf5ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33808"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-404149": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:ab:59:76:89:4e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7bb35a9e63004fb5710c19eaa0fed0c73a27efd3fdd5fdafde151cb4543696cc",
	                    "EndpointID": "36af1a99a3f3f3fa58f9793bee5b580c39e2e6f45d0f182c525415e0f832cea4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-404149",
	                        "69e998144c08"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
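The port mappings in the inspect output above can also be read with a --format query instead of scanning the full JSON; for example, the host port published for the node's SSH port (22/tcp) could be pulled out with something like:

    docker inspect embed-certs-404149 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'

which for this container would print 33804.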
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-404149 -n embed-certs-404149
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-404149 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-404149 logs -n 25: (1.31496473s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p kubernetes-upgrade-436490                                                                                                                                                                                                                  │ kubernetes-upgrade-436490    │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:42 UTC │
	│ start   │ -p cert-expiration-636406 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:43 UTC │
	│ delete  │ -p force-systemd-env-386707                                                                                                                                                                                                                   │ force-systemd-env-386707     │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:42 UTC │
	│ start   │ -p cert-options-303284 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-303284          │ jenkins │ v1.37.0 │ 15 Nov 25 11:42 UTC │ 15 Nov 25 11:43 UTC │
	│ ssh     │ cert-options-303284 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-303284          │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ ssh     │ -p cert-options-303284 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-303284          │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ delete  │ -p cert-options-303284                                                                                                                                                                                                                        │ cert-options-303284          │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:44 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-872969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │                     │
	│ stop    │ -p old-k8s-version-872969 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-872969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:44 UTC │
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:45 UTC │
	│ image   │ old-k8s-version-872969 image list --format=json                                                                                                                                                                                               │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ pause   │ -p old-k8s-version-872969 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │                     │
	│ delete  │ -p old-k8s-version-872969                                                                                                                                                                                                                     │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ delete  │ -p old-k8s-version-872969                                                                                                                                                                                                                     │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ start   │ -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:47 UTC │
	│ start   │ -p cert-expiration-636406 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ delete  │ -p cert-expiration-636406                                                                                                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-769461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-769461 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:47 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-769461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:47 UTC │
	│ start   │ -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-404149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:47:44
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:47:44.472667  781316 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:47:44.472802  781316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:47:44.472819  781316 out.go:374] Setting ErrFile to fd 2...
	I1115 11:47:44.472827  781316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:47:44.474122  781316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:47:44.474559  781316 out.go:368] Setting JSON to false
	I1115 11:47:44.475625  781316 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12615,"bootTime":1763194649,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:47:44.475694  781316 start.go:143] virtualization:  
	I1115 11:47:44.479490  781316 out.go:179] * [default-k8s-diff-port-769461] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:47:44.483205  781316 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:47:44.483358  781316 notify.go:221] Checking for updates...
	I1115 11:47:44.488832  781316 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:47:44.491691  781316 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:47:44.494499  781316 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:47:44.497306  781316 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:47:44.500074  781316 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:47:44.503276  781316 config.go:182] Loaded profile config "default-k8s-diff-port-769461": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:47:44.503979  781316 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:47:44.537794  781316 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:47:44.537930  781316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:47:44.598056  781316 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:47:44.587628591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:47:44.598175  781316 docker.go:319] overlay module found
	I1115 11:47:44.601265  781316 out.go:179] * Using the docker driver based on existing profile
	I1115 11:47:44.604056  781316 start.go:309] selected driver: docker
	I1115 11:47:44.604074  781316 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-769461 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-769461 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:47:44.604174  781316 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:47:44.605199  781316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:47:44.674332  781316 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:47:44.663633858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:47:44.674693  781316 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:47:44.674736  781316 cni.go:84] Creating CNI manager for ""
	I1115 11:47:44.674808  781316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:47:44.674847  781316 start.go:353] cluster config:
	{Name:default-k8s-diff-port-769461 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-769461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:47:44.679752  781316 out.go:179] * Starting "default-k8s-diff-port-769461" primary control-plane node in "default-k8s-diff-port-769461" cluster
	I1115 11:47:44.682674  781316 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:47:44.685815  781316 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:47:44.688702  781316 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:47:44.688754  781316 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 11:47:44.688771  781316 cache.go:65] Caching tarball of preloaded images
	I1115 11:47:44.688795  781316 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:47:44.688953  781316 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:47:44.688965  781316 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:47:44.689114  781316 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/config.json ...
	I1115 11:47:44.711061  781316 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:47:44.711086  781316 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:47:44.711100  781316 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:47:44.711135  781316 start.go:360] acquireMachinesLock for default-k8s-diff-port-769461: {Name:mk7714bd8be801cd42d2b51435d0d40bee6ca46d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:47:44.711211  781316 start.go:364] duration metric: took 53.498µs to acquireMachinesLock for "default-k8s-diff-port-769461"
	I1115 11:47:44.711250  781316 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:47:44.711261  781316 fix.go:54] fixHost starting: 
	I1115 11:47:44.711539  781316 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-769461 --format={{.State.Status}}
	I1115 11:47:44.729557  781316 fix.go:112] recreateIfNeeded on default-k8s-diff-port-769461: state=Stopped err=<nil>
	W1115 11:47:44.729588  781316 fix.go:138] unexpected machine state, will restart: <nil>
	W1115 11:47:41.512307  777983 node_ready.go:57] node "embed-certs-404149" has "Ready":"False" status (will retry)
	W1115 11:47:44.012630  777983 node_ready.go:57] node "embed-certs-404149" has "Ready":"False" status (will retry)
	W1115 11:47:46.014682  777983 node_ready.go:57] node "embed-certs-404149" has "Ready":"False" status (will retry)
	I1115 11:47:44.733008  781316 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-769461" ...
	I1115 11:47:44.733138  781316 cli_runner.go:164] Run: docker start default-k8s-diff-port-769461
	I1115 11:47:45.009060  781316 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-769461 --format={{.State.Status}}
	I1115 11:47:45.046293  781316 kic.go:430] container "default-k8s-diff-port-769461" state is running.
	I1115 11:47:45.046990  781316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-769461
	I1115 11:47:45.084522  781316 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/config.json ...
	I1115 11:47:45.085255  781316 machine.go:94] provisionDockerMachine start ...
	I1115 11:47:45.085365  781316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-769461
	I1115 11:47:45.119568  781316 main.go:143] libmachine: Using SSH client type: native
	I1115 11:47:45.119947  781316 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33809 <nil> <nil>}
	I1115 11:47:45.119969  781316 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:47:45.120831  781316 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:47:48.277720  781316 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-769461
	
	I1115 11:47:48.277742  781316 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-769461"
	I1115 11:47:48.277830  781316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-769461
	I1115 11:47:48.295631  781316 main.go:143] libmachine: Using SSH client type: native
	I1115 11:47:48.295963  781316 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33809 <nil> <nil>}
	I1115 11:47:48.295980  781316 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-769461 && echo "default-k8s-diff-port-769461" | sudo tee /etc/hostname
	I1115 11:47:48.467785  781316 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-769461
	
	I1115 11:47:48.467869  781316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-769461
	I1115 11:47:48.487003  781316 main.go:143] libmachine: Using SSH client type: native
	I1115 11:47:48.487408  781316 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33809 <nil> <nil>}
	I1115 11:47:48.487431  781316 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-769461' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-769461/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-769461' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:47:48.637037  781316 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:47:48.637132  781316 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:47:48.637187  781316 ubuntu.go:190] setting up certificates
	I1115 11:47:48.637222  781316 provision.go:84] configureAuth start
	I1115 11:47:48.637316  781316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-769461
	I1115 11:47:48.654821  781316 provision.go:143] copyHostCerts
	I1115 11:47:48.654891  781316 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:47:48.654907  781316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:47:48.654992  781316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:47:48.655089  781316 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:47:48.655100  781316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:47:48.655127  781316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:47:48.655183  781316 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:47:48.655192  781316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:47:48.655216  781316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:47:48.655267  781316 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-769461 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-769461 localhost minikube]
	I1115 11:47:49.285575  781316 provision.go:177] copyRemoteCerts
	I1115 11:47:49.285641  781316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:47:49.285690  781316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-769461
	I1115 11:47:49.304048  781316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/default-k8s-diff-port-769461/id_rsa Username:docker}
	I1115 11:47:49.408635  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:47:49.426689  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1115 11:47:49.444676  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:47:49.462590  781316 provision.go:87] duration metric: took 825.329414ms to configureAuth
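	# configureAuth above generated a fresh server certificate with SANs
	# [127.0.0.1 192.168.85.2 default-k8s-diff-port-769461 localhost minikube] and the scp
	# just above placed it at /etc/docker/server.pem in the node. A minimal sketch, assuming
	# the container from this run is still up and openssl is available in the kicbase image
	# (the cert-options entries in the Audit table ran openssl inside a node), to confirm:
	docker exec default-k8s-diff-port-769461 \
	  openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'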
	I1115 11:47:49.462663  781316 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:47:49.462911  781316 config.go:182] Loaded profile config "default-k8s-diff-port-769461": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:47:49.463081  781316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-769461
	W1115 11:47:48.513098  777983 node_ready.go:57] node "embed-certs-404149" has "Ready":"False" status (will retry)
	W1115 11:47:51.013377  777983 node_ready.go:57] node "embed-certs-404149" has "Ready":"False" status (will retry)
	I1115 11:47:49.482246  781316 main.go:143] libmachine: Using SSH client type: native
	I1115 11:47:49.482563  781316 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33809 <nil> <nil>}
	I1115 11:47:49.482591  781316 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:47:49.808435  781316 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:47:49.808460  781316 machine.go:97] duration metric: took 4.723175496s to provisionDockerMachine
	I1115 11:47:49.808471  781316 start.go:293] postStartSetup for "default-k8s-diff-port-769461" (driver="docker")
	I1115 11:47:49.808482  781316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:47:49.808552  781316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:47:49.808601  781316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-769461
	I1115 11:47:49.830534  781316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/default-k8s-diff-port-769461/id_rsa Username:docker}
	I1115 11:47:49.936937  781316 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:47:49.940355  781316 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:47:49.940385  781316 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:47:49.940397  781316 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:47:49.940451  781316 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:47:49.940564  781316 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:47:49.940674  781316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:47:49.948215  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:47:49.966901  781316 start.go:296] duration metric: took 158.414318ms for postStartSetup
	I1115 11:47:49.966978  781316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:47:49.967033  781316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-769461
	I1115 11:47:49.985125  781316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/default-k8s-diff-port-769461/id_rsa Username:docker}
	I1115 11:47:50.090386  781316 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:47:50.095851  781316 fix.go:56] duration metric: took 5.384582918s for fixHost
	I1115 11:47:50.095875  781316 start.go:83] releasing machines lock for "default-k8s-diff-port-769461", held for 5.384643924s
	I1115 11:47:50.095953  781316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-769461
	I1115 11:47:50.115084  781316 ssh_runner.go:195] Run: cat /version.json
	I1115 11:47:50.115119  781316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:47:50.115134  781316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-769461
	I1115 11:47:50.115184  781316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-769461
	I1115 11:47:50.139063  781316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/default-k8s-diff-port-769461/id_rsa Username:docker}
	I1115 11:47:50.150586  781316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/default-k8s-diff-port-769461/id_rsa Username:docker}
	I1115 11:47:50.331091  781316 ssh_runner.go:195] Run: systemctl --version
	I1115 11:47:50.337675  781316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:47:50.373400  781316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:47:50.377719  781316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:47:50.377787  781316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:47:50.385877  781316 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:47:50.385902  781316 start.go:496] detecting cgroup driver to use...
	I1115 11:47:50.385933  781316 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:47:50.385985  781316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:47:50.400785  781316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:47:50.414039  781316 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:47:50.414111  781316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:47:50.429214  781316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:47:50.442766  781316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:47:50.579162  781316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:47:50.708444  781316 docker.go:234] disabling docker service ...
	I1115 11:47:50.708517  781316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:47:50.724259  781316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:47:50.738019  781316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:47:50.875510  781316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:47:50.994380  781316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:47:51.012110  781316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:47:51.030148  781316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:47:51.030236  781316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:47:51.040150  781316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:47:51.040249  781316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:47:51.049853  781316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:47:51.066278  781316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:47:51.077502  781316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:47:51.086644  781316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:47:51.097432  781316 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:47:51.107900  781316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:47:51.119458  781316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:47:51.127851  781316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:47:51.135933  781316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:47:51.274522  781316 ssh_runner.go:195] Run: sudo systemctl restart crio
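	# The sed/tee calls above rewrite cri-o's drop-in (pause image registry.k8s.io/pause:3.10.1,
	# cgroup_manager "cgroupfs", conmon_cgroup "pod", net.ipv4.ip_unprivileged_port_start=0) and
	# /etc/sysconfig/crio.minikube (--insecure-registry 10.96.0.0/12) before restarting crio.
	# A minimal sketch, assuming the node container from this run is still up, to read back
	# what was written:
	docker exec default-k8s-diff-port-769461 sh -c \
	  'grep -E "pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start" /etc/crio/crio.conf.d/02-crio.conf; cat /etc/sysconfig/crio.minikube'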
	I1115 11:47:51.407770  781316 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:47:51.407880  781316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:47:51.412621  781316 start.go:564] Will wait 60s for crictl version
	I1115 11:47:51.412739  781316 ssh_runner.go:195] Run: which crictl
	I1115 11:47:51.416804  781316 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:47:51.445686  781316 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:47:51.445819  781316 ssh_runner.go:195] Run: crio --version
	I1115 11:47:51.476650  781316 ssh_runner.go:195] Run: crio --version
	I1115 11:47:51.510566  781316 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:47:51.513379  781316 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-769461 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:47:51.529055  781316 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 11:47:51.532797  781316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:47:51.542335  781316 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-769461 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-769461 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:47:51.542460  781316 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:47:51.542532  781316 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:47:51.575779  781316 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:47:51.575806  781316 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:47:51.575863  781316 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:47:51.601778  781316 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:47:51.601802  781316 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:47:51.601810  781316 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1115 11:47:51.601905  781316 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-769461 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-769461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
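	# The [Unit]/[Service] fragment above is written to /lib/systemd/system/kubelet.service and
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp calls further down, then
	# picked up by systemctl daemon-reload. A minimal sketch, assuming the node container is
	# still running, to print the unit plus drop-in that systemd actually loaded:
	docker exec default-k8s-diff-port-769461 systemctl cat kubelet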
	I1115 11:47:51.601990  781316 ssh_runner.go:195] Run: crio config
	I1115 11:47:51.664573  781316 cni.go:84] Creating CNI manager for ""
	I1115 11:47:51.664598  781316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:47:51.664624  781316 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:47:51.664655  781316 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-769461 NodeName:default-k8s-diff-port-769461 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:47:51.664810  781316 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-769461"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
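	# The three-document kubeadm config above (InitConfiguration / ClusterConfiguration /
	# KubeletConfiguration / KubeProxyConfiguration) is staged on the node as
	# /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal, purely illustrative
	# sketch (minikube drives kubeadm itself, and the staged file may be renamed once
	# consumed) of spot-checking the profile-specific fields:
	docker exec default-k8s-diff-port-769461 \
	  grep -E 'bindPort|controlPlaneEndpoint|podSubnet|clusterCIDR' /var/tmp/minikube/kubeadm.yaml.new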
	
	I1115 11:47:51.664935  781316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:47:51.674441  781316 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:47:51.674508  781316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:47:51.682510  781316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1115 11:47:51.696818  781316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:47:51.710205  781316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1115 11:47:51.724610  781316 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:47:51.728494  781316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:47:51.738563  781316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:47:51.861842  781316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:47:51.880233  781316 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461 for IP: 192.168.85.2
	I1115 11:47:51.880296  781316 certs.go:195] generating shared ca certs ...
	I1115 11:47:51.880324  781316 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:47:51.880506  781316 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:47:51.880585  781316 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:47:51.880620  781316 certs.go:257] generating profile certs ...
	I1115 11:47:51.880755  781316 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.key
	I1115 11:47:51.880884  781316 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/apiserver.key.9e2f2122
	I1115 11:47:51.880955  781316 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/proxy-client.key
	I1115 11:47:51.881109  781316 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:47:51.881164  781316 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:47:51.881190  781316 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:47:51.881244  781316 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:47:51.881302  781316 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:47:51.881358  781316 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:47:51.881443  781316 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:47:51.882064  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:47:51.904921  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:47:51.935453  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:47:51.958881  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:47:51.990940  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 11:47:52.025587  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:47:52.049770  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:47:52.081942  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:47:52.107061  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:47:52.126689  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:47:52.149947  781316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:47:52.167832  781316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:47:52.182320  781316 ssh_runner.go:195] Run: openssl version
	I1115 11:47:52.188754  781316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:47:52.197634  781316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:47:52.201896  781316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:47:52.202007  781316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:47:52.246912  781316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:47:52.255149  781316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:47:52.263685  781316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:47:52.267937  781316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:47:52.268064  781316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:47:52.310504  781316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:47:52.318385  781316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:47:52.326975  781316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:47:52.332180  781316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:47:52.332248  781316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:47:52.373315  781316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:47:52.381174  781316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:47:52.384961  781316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:47:52.426106  781316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:47:52.471797  781316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:47:52.518193  781316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:47:52.579332  781316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:47:52.661837  781316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 11:47:52.745707  781316 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-769461 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-769461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:47:52.745809  781316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:47:52.745938  781316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:47:52.803492  781316 cri.go:89] found id: "58a8cafbd658243739209adc98b5cca4fb51708fc98f57d93b11c6d97859707b"
	I1115 11:47:52.803555  781316 cri.go:89] found id: "c28f3e68692e829f48e01931512e3679a6223533e56ed8f074c9d056fafd4609"
	I1115 11:47:52.803586  781316 cri.go:89] found id: "1222b8dec2b50ece8a4af1cb27e223b6a0079f14fc1c5ecf88240ddba9fe0ee0"
	I1115 11:47:52.803602  781316 cri.go:89] found id: "faf86f2f211634e1d17c6370364e838bc04fe0108542f93851f68044cecfe2f9"
	I1115 11:47:52.803631  781316 cri.go:89] found id: ""
	I1115 11:47:52.803711  781316 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 11:47:52.819991  781316 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:47:52Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:47:52.820126  781316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:47:52.842139  781316 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:47:52.842161  781316 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:47:52.842242  781316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:47:52.858672  781316 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:47:52.859627  781316 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-769461" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:47:52.860237  781316 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-584713/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-769461" cluster setting kubeconfig missing "default-k8s-diff-port-769461" context setting]
	I1115 11:47:52.861201  781316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:47:52.863159  781316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:47:52.880139  781316 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 11:47:52.880176  781316 kubeadm.go:602] duration metric: took 38.008741ms to restartPrimaryControlPlane
	I1115 11:47:52.880212  781316 kubeadm.go:403] duration metric: took 134.516419ms to StartCluster
	I1115 11:47:52.880234  781316 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:47:52.880319  781316 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:47:52.882132  781316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:47:52.882531  781316 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:47:52.882935  781316 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:47:52.883015  781316 config.go:182] Loaded profile config "default-k8s-diff-port-769461": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:47:52.883026  781316 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-769461"
	I1115 11:47:52.883039  781316 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-769461"
	W1115 11:47:52.883045  781316 addons.go:248] addon dashboard should already be in state true
	I1115 11:47:52.883017  781316 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-769461"
	I1115 11:47:52.883066  781316 host.go:66] Checking if "default-k8s-diff-port-769461" exists ...
	I1115 11:47:52.883076  781316 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-769461"
	W1115 11:47:52.883083  781316 addons.go:248] addon storage-provisioner should already be in state true
	I1115 11:47:52.883102  781316 host.go:66] Checking if "default-k8s-diff-port-769461" exists ...
	I1115 11:47:52.883512  781316 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-769461 --format={{.State.Status}}
	I1115 11:47:52.883581  781316 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-769461 --format={{.State.Status}}
	I1115 11:47:52.884228  781316 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-769461"
	I1115 11:47:52.884250  781316 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-769461"
	I1115 11:47:52.884519  781316 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-769461 --format={{.State.Status}}
	I1115 11:47:52.886975  781316 out.go:179] * Verifying Kubernetes components...
	I1115 11:47:52.892956  781316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:47:52.928902  781316 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:47:52.932003  781316 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:47:52.932026  781316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:47:52.932097  781316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-769461
	I1115 11:47:52.946663  781316 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 11:47:52.950127  781316 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 11:47:52.952948  781316 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 11:47:52.952975  781316 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 11:47:52.953051  781316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-769461
	I1115 11:47:52.958268  781316 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-769461"
	W1115 11:47:52.958310  781316 addons.go:248] addon default-storageclass should already be in state true
	I1115 11:47:52.958335  781316 host.go:66] Checking if "default-k8s-diff-port-769461" exists ...
	I1115 11:47:52.958765  781316 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-769461 --format={{.State.Status}}
	I1115 11:47:52.965191  781316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/default-k8s-diff-port-769461/id_rsa Username:docker}
	I1115 11:47:52.999429  781316 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:47:52.999458  781316 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:47:52.999529  781316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-769461
	I1115 11:47:53.007509  781316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/default-k8s-diff-port-769461/id_rsa Username:docker}
	I1115 11:47:53.029473  781316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/default-k8s-diff-port-769461/id_rsa Username:docker}
	I1115 11:47:53.289325  781316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:47:53.298078  781316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:47:53.342925  781316 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 11:47:53.343000  781316 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 11:47:53.370413  781316 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-769461" to be "Ready" ...
	I1115 11:47:53.382298  781316 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 11:47:53.382323  781316 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 11:47:53.405571  781316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:47:53.434584  781316 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 11:47:53.434659  781316 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 11:47:53.473693  781316 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 11:47:53.473713  781316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 11:47:53.544026  781316 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 11:47:53.544051  781316 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 11:47:53.604280  781316 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 11:47:53.604306  781316 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 11:47:53.704652  781316 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 11:47:53.704677  781316 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 11:47:53.726418  781316 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 11:47:53.726442  781316 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 11:47:53.750056  781316 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:47:53.750082  781316 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 11:47:53.773686  781316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1115 11:47:53.512630  777983 node_ready.go:57] node "embed-certs-404149" has "Ready":"False" status (will retry)
	W1115 11:47:55.512815  777983 node_ready.go:57] node "embed-certs-404149" has "Ready":"False" status (will retry)
	I1115 11:47:57.883462  781316 node_ready.go:49] node "default-k8s-diff-port-769461" is "Ready"
	I1115 11:47:57.883493  781316 node_ready.go:38] duration metric: took 4.513024165s for node "default-k8s-diff-port-769461" to be "Ready" ...
	I1115 11:47:57.883507  781316 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:47:57.883564  781316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:47:59.358674  781316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.060511531s)
	I1115 11:47:59.358732  781316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.953093415s)
	I1115 11:47:59.384678  781316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.610947661s)
	I1115 11:47:59.384918  781316 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.501336478s)
	I1115 11:47:59.384939  781316 api_server.go:72] duration metric: took 6.502375457s to wait for apiserver process to appear ...
	I1115 11:47:59.384945  781316 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:47:59.384963  781316 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 11:47:59.387678  781316 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-769461 addons enable metrics-server
	
	I1115 11:47:59.390464  781316 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1115 11:47:59.393994  781316 addons.go:515] duration metric: took 6.511053986s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1115 11:47:59.394743  781316 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:47:59.394763  781316 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:47:58.012925  777983 node_ready.go:57] node "embed-certs-404149" has "Ready":"False" status (will retry)
	W1115 11:48:00.020111  777983 node_ready.go:57] node "embed-certs-404149" has "Ready":"False" status (will retry)
	I1115 11:47:59.885619  781316 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 11:47:59.894575  781316 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1115 11:47:59.895630  781316 api_server.go:141] control plane version: v1.34.1
	I1115 11:47:59.895654  781316 api_server.go:131] duration metric: took 510.702895ms to wait for apiserver health ...
	I1115 11:47:59.895664  781316 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:47:59.899062  781316 system_pods.go:59] 8 kube-system pods found
	I1115 11:47:59.899099  781316 system_pods.go:61] "coredns-66bc5c9577-xpkjw" [70eed49b-a283-4cc7-ac67-71e32653ab35] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:47:59.899109  781316 system_pods.go:61] "etcd-default-k8s-diff-port-769461" [af98b066-3f75-431d-80f7-4acee1838af0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:47:59.899121  781316 system_pods.go:61] "kindnet-kzp2q" [64bdadbe-69c1-445f-85af-a9efd841c7b9] Running
	I1115 11:47:59.899139  781316 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-769461" [b571a160-49cb-4df1-b2a7-d48a6e3b4ffe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:47:59.899147  781316 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-769461" [ed50fc98-2e05-498f-b214-6efbdfbb592d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:47:59.899152  781316 system_pods.go:61] "kube-proxy-j8s2w" [dbf02ced-547a-4bfd-b59d-1ff41c5da369] Running
	I1115 11:47:59.899161  781316 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-769461" [d6423eb5-3513-48c3-ab04-640e8b8ba7c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:47:59.899166  781316 system_pods.go:61] "storage-provisioner" [221d3633-db7a-4b63-8cb1-84cb8a39832d] Running
	I1115 11:47:59.899174  781316 system_pods.go:74] duration metric: took 3.504375ms to wait for pod list to return data ...
	I1115 11:47:59.899187  781316 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:47:59.901621  781316 default_sa.go:45] found service account: "default"
	I1115 11:47:59.901645  781316 default_sa.go:55] duration metric: took 2.452392ms for default service account to be created ...
	I1115 11:47:59.901655  781316 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:47:59.904556  781316 system_pods.go:86] 8 kube-system pods found
	I1115 11:47:59.904590  781316 system_pods.go:89] "coredns-66bc5c9577-xpkjw" [70eed49b-a283-4cc7-ac67-71e32653ab35] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:47:59.904602  781316 system_pods.go:89] "etcd-default-k8s-diff-port-769461" [af98b066-3f75-431d-80f7-4acee1838af0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:47:59.904609  781316 system_pods.go:89] "kindnet-kzp2q" [64bdadbe-69c1-445f-85af-a9efd841c7b9] Running
	I1115 11:47:59.904616  781316 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-769461" [b571a160-49cb-4df1-b2a7-d48a6e3b4ffe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:47:59.904629  781316 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-769461" [ed50fc98-2e05-498f-b214-6efbdfbb592d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:47:59.904640  781316 system_pods.go:89] "kube-proxy-j8s2w" [dbf02ced-547a-4bfd-b59d-1ff41c5da369] Running
	I1115 11:47:59.904649  781316 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-769461" [d6423eb5-3513-48c3-ab04-640e8b8ba7c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:47:59.904656  781316 system_pods.go:89] "storage-provisioner" [221d3633-db7a-4b63-8cb1-84cb8a39832d] Running
	I1115 11:47:59.904664  781316 system_pods.go:126] duration metric: took 3.003053ms to wait for k8s-apps to be running ...
	I1115 11:47:59.904677  781316 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:47:59.904733  781316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:47:59.917713  781316 system_svc.go:56] duration metric: took 13.027114ms WaitForService to wait for kubelet
	I1115 11:47:59.917740  781316 kubeadm.go:587] duration metric: took 7.035176803s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:47:59.917759  781316 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:47:59.920415  781316 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:47:59.920444  781316 node_conditions.go:123] node cpu capacity is 2
	I1115 11:47:59.920458  781316 node_conditions.go:105] duration metric: took 2.693765ms to run NodePressure ...
	I1115 11:47:59.920496  781316 start.go:242] waiting for startup goroutines ...
	I1115 11:47:59.920511  781316 start.go:247] waiting for cluster config update ...
	I1115 11:47:59.920523  781316 start.go:256] writing updated cluster config ...
	I1115 11:47:59.920837  781316 ssh_runner.go:195] Run: rm -f paused
	I1115 11:47:59.924575  781316 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:47:59.929616  781316 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xpkjw" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:48:01.935203  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	W1115 11:48:03.936830  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	W1115 11:48:02.512837  777983 node_ready.go:57] node "embed-certs-404149" has "Ready":"False" status (will retry)
	I1115 11:48:04.013602  777983 node_ready.go:49] node "embed-certs-404149" is "Ready"
	I1115 11:48:04.013654  777983 node_ready.go:38] duration metric: took 40.504302955s for node "embed-certs-404149" to be "Ready" ...
	I1115 11:48:04.013687  777983 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:48:04.013753  777983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:48:04.040724  777983 api_server.go:72] duration metric: took 41.557093264s to wait for apiserver process to appear ...
	I1115 11:48:04.040751  777983 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:48:04.040770  777983 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:48:04.071757  777983 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 11:48:04.078717  777983 api_server.go:141] control plane version: v1.34.1
	I1115 11:48:04.078750  777983 api_server.go:131] duration metric: took 37.992057ms to wait for apiserver health ...
	I1115 11:48:04.078760  777983 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:48:04.099920  777983 system_pods.go:59] 8 kube-system pods found
	I1115 11:48:04.099969  777983 system_pods.go:61] "coredns-66bc5c9577-2l449" [5e943487-c90a-4a5d-8954-6d44870ececc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:48:04.099977  777983 system_pods.go:61] "etcd-embed-certs-404149" [061e2652-8536-4564-bd3c-aa1d961acc3d] Running
	I1115 11:48:04.099984  777983 system_pods.go:61] "kindnet-qsvh7" [65b3cd6e-66ac-4934-91d3-16fdc27af287] Running
	I1115 11:48:04.099989  777983 system_pods.go:61] "kube-apiserver-embed-certs-404149" [df336c4d-f7c7-4ec6-98d8-dc1aef88cea7] Running
	I1115 11:48:04.099994  777983 system_pods.go:61] "kube-controller-manager-embed-certs-404149" [cb5308c4-97af-4752-9cd2-856eb8d915fb] Running
	I1115 11:48:04.099999  777983 system_pods.go:61] "kube-proxy-5d2lb" [be30c5c3-f080-4721-b6d8-2f18f7736abe] Running
	I1115 11:48:04.100007  777983 system_pods.go:61] "kube-scheduler-embed-certs-404149" [808c1b05-090a-4dd9-9c5b-53960a09c527] Running
	I1115 11:48:04.100015  777983 system_pods.go:61] "storage-provisioner" [7b6e6bb5-e4cf-486d-bfc3-d07a3848e221] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:48:04.100034  777983 system_pods.go:74] duration metric: took 21.259984ms to wait for pod list to return data ...
	I1115 11:48:04.100048  777983 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:48:04.119511  777983 default_sa.go:45] found service account: "default"
	I1115 11:48:04.119540  777983 default_sa.go:55] duration metric: took 19.484826ms for default service account to be created ...
	I1115 11:48:04.119550  777983 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:48:04.200475  777983 system_pods.go:86] 8 kube-system pods found
	I1115 11:48:04.200519  777983 system_pods.go:89] "coredns-66bc5c9577-2l449" [5e943487-c90a-4a5d-8954-6d44870ececc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:48:04.200527  777983 system_pods.go:89] "etcd-embed-certs-404149" [061e2652-8536-4564-bd3c-aa1d961acc3d] Running
	I1115 11:48:04.200533  777983 system_pods.go:89] "kindnet-qsvh7" [65b3cd6e-66ac-4934-91d3-16fdc27af287] Running
	I1115 11:48:04.200538  777983 system_pods.go:89] "kube-apiserver-embed-certs-404149" [df336c4d-f7c7-4ec6-98d8-dc1aef88cea7] Running
	I1115 11:48:04.200543  777983 system_pods.go:89] "kube-controller-manager-embed-certs-404149" [cb5308c4-97af-4752-9cd2-856eb8d915fb] Running
	I1115 11:48:04.200547  777983 system_pods.go:89] "kube-proxy-5d2lb" [be30c5c3-f080-4721-b6d8-2f18f7736abe] Running
	I1115 11:48:04.200551  777983 system_pods.go:89] "kube-scheduler-embed-certs-404149" [808c1b05-090a-4dd9-9c5b-53960a09c527] Running
	I1115 11:48:04.200565  777983 system_pods.go:89] "storage-provisioner" [7b6e6bb5-e4cf-486d-bfc3-d07a3848e221] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:48:04.200593  777983 retry.go:31] will retry after 294.727192ms: missing components: kube-dns
	I1115 11:48:04.499718  777983 system_pods.go:86] 8 kube-system pods found
	I1115 11:48:04.499763  777983 system_pods.go:89] "coredns-66bc5c9577-2l449" [5e943487-c90a-4a5d-8954-6d44870ececc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:48:04.499771  777983 system_pods.go:89] "etcd-embed-certs-404149" [061e2652-8536-4564-bd3c-aa1d961acc3d] Running
	I1115 11:48:04.499777  777983 system_pods.go:89] "kindnet-qsvh7" [65b3cd6e-66ac-4934-91d3-16fdc27af287] Running
	I1115 11:48:04.499782  777983 system_pods.go:89] "kube-apiserver-embed-certs-404149" [df336c4d-f7c7-4ec6-98d8-dc1aef88cea7] Running
	I1115 11:48:04.499786  777983 system_pods.go:89] "kube-controller-manager-embed-certs-404149" [cb5308c4-97af-4752-9cd2-856eb8d915fb] Running
	I1115 11:48:04.499791  777983 system_pods.go:89] "kube-proxy-5d2lb" [be30c5c3-f080-4721-b6d8-2f18f7736abe] Running
	I1115 11:48:04.499795  777983 system_pods.go:89] "kube-scheduler-embed-certs-404149" [808c1b05-090a-4dd9-9c5b-53960a09c527] Running
	I1115 11:48:04.499800  777983 system_pods.go:89] "storage-provisioner" [7b6e6bb5-e4cf-486d-bfc3-d07a3848e221] Running
	I1115 11:48:04.499815  777983 system_pods.go:126] duration metric: took 380.251412ms to wait for k8s-apps to be running ...
	I1115 11:48:04.499827  777983 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:48:04.499890  777983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:48:04.513869  777983 system_svc.go:56] duration metric: took 14.03268ms WaitForService to wait for kubelet
	I1115 11:48:04.513898  777983 kubeadm.go:587] duration metric: took 42.030273068s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:48:04.513916  777983 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:48:04.516763  777983 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:48:04.516795  777983 node_conditions.go:123] node cpu capacity is 2
	I1115 11:48:04.516809  777983 node_conditions.go:105] duration metric: took 2.885981ms to run NodePressure ...
	I1115 11:48:04.516822  777983 start.go:242] waiting for startup goroutines ...
	I1115 11:48:04.516831  777983 start.go:247] waiting for cluster config update ...
	I1115 11:48:04.516842  777983 start.go:256] writing updated cluster config ...
	I1115 11:48:04.517160  777983 ssh_runner.go:195] Run: rm -f paused
	I1115 11:48:04.521565  777983 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:48:04.527675  777983 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2l449" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:05.534344  777983 pod_ready.go:94] pod "coredns-66bc5c9577-2l449" is "Ready"
	I1115 11:48:05.534384  777983 pod_ready.go:86] duration metric: took 1.006683955s for pod "coredns-66bc5c9577-2l449" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:05.537591  777983 pod_ready.go:83] waiting for pod "etcd-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:05.542837  777983 pod_ready.go:94] pod "etcd-embed-certs-404149" is "Ready"
	I1115 11:48:05.542877  777983 pod_ready.go:86] duration metric: took 5.258322ms for pod "etcd-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:05.545508  777983 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:05.550975  777983 pod_ready.go:94] pod "kube-apiserver-embed-certs-404149" is "Ready"
	I1115 11:48:05.551010  777983 pod_ready.go:86] duration metric: took 5.475507ms for pod "kube-apiserver-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:05.553795  777983 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:05.731366  777983 pod_ready.go:94] pod "kube-controller-manager-embed-certs-404149" is "Ready"
	I1115 11:48:05.731411  777983 pod_ready.go:86] duration metric: took 177.589001ms for pod "kube-controller-manager-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:05.932012  777983 pod_ready.go:83] waiting for pod "kube-proxy-5d2lb" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:06.333688  777983 pod_ready.go:94] pod "kube-proxy-5d2lb" is "Ready"
	I1115 11:48:06.333716  777983 pod_ready.go:86] duration metric: took 401.675735ms for pod "kube-proxy-5d2lb" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:06.532479  777983 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:06.931266  777983 pod_ready.go:94] pod "kube-scheduler-embed-certs-404149" is "Ready"
	I1115 11:48:06.931337  777983 pod_ready.go:86] duration metric: took 398.82022ms for pod "kube-scheduler-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:06.931365  777983 pod_ready.go:40] duration metric: took 2.409766397s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:48:07.016559  777983 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:48:07.019720  777983 out.go:179] * Done! kubectl is now configured to use "embed-certs-404149" cluster and "default" namespace by default
	W1115 11:48:05.937491  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	W1115 11:48:08.436223  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	W1115 11:48:10.438283  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	W1115 11:48:12.935824  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 15 11:48:04 embed-certs-404149 crio[837]: time="2025-11-15T11:48:04.14330175Z" level=info msg="Created container 4d0a8eb543de824a39a75fee152bdc800279a9f7c131bbf4e109b6a26998cdcd: kube-system/coredns-66bc5c9577-2l449/coredns" id=f57d3efc-bdb7-45e7-8710-f598f5d61a24 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:48:04 embed-certs-404149 crio[837]: time="2025-11-15T11:48:04.144732527Z" level=info msg="Starting container: 4d0a8eb543de824a39a75fee152bdc800279a9f7c131bbf4e109b6a26998cdcd" id=0ce348a6-8d92-4342-b341-f2b95107d9b3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:48:04 embed-certs-404149 crio[837]: time="2025-11-15T11:48:04.156755544Z" level=info msg="Started container" PID=1746 containerID=4d0a8eb543de824a39a75fee152bdc800279a9f7c131bbf4e109b6a26998cdcd description=kube-system/coredns-66bc5c9577-2l449/coredns id=0ce348a6-8d92-4342-b341-f2b95107d9b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d3af6b11e87466e87b8552c80ce8ce65743aa127051c4abdac1b9497be40d3c
	Nov 15 11:48:07 embed-certs-404149 crio[837]: time="2025-11-15T11:48:07.653606636Z" level=info msg="Running pod sandbox: default/busybox/POD" id=43c13b85-d6ac-4b6e-9087-10e2a2900ea4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:48:07 embed-certs-404149 crio[837]: time="2025-11-15T11:48:07.653677094Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:48:07 embed-certs-404149 crio[837]: time="2025-11-15T11:48:07.673136287Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:aa0f179c785586fb8cf4fcb77a825b6ce95eb436b799955f80a80296934e414c UID:3107e5d8-dcc1-42b2-8764-e2ce45e76676 NetNS:/var/run/netns/b6022f2c-d6c1-478e-a9c6-e741cc58c53b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079118}] Aliases:map[]}"
	Nov 15 11:48:07 embed-certs-404149 crio[837]: time="2025-11-15T11:48:07.673314553Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 11:48:07 embed-certs-404149 crio[837]: time="2025-11-15T11:48:07.688181573Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:aa0f179c785586fb8cf4fcb77a825b6ce95eb436b799955f80a80296934e414c UID:3107e5d8-dcc1-42b2-8764-e2ce45e76676 NetNS:/var/run/netns/b6022f2c-d6c1-478e-a9c6-e741cc58c53b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079118}] Aliases:map[]}"
	Nov 15 11:48:07 embed-certs-404149 crio[837]: time="2025-11-15T11:48:07.688469206Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 11:48:07 embed-certs-404149 crio[837]: time="2025-11-15T11:48:07.705223155Z" level=info msg="Ran pod sandbox aa0f179c785586fb8cf4fcb77a825b6ce95eb436b799955f80a80296934e414c with infra container: default/busybox/POD" id=43c13b85-d6ac-4b6e-9087-10e2a2900ea4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:48:07 embed-certs-404149 crio[837]: time="2025-11-15T11:48:07.708137197Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=889057f5-bf66-4a2a-b120-682c2dd11720 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:48:07 embed-certs-404149 crio[837]: time="2025-11-15T11:48:07.708613445Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=889057f5-bf66-4a2a-b120-682c2dd11720 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:48:07 embed-certs-404149 crio[837]: time="2025-11-15T11:48:07.71159925Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=889057f5-bf66-4a2a-b120-682c2dd11720 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:48:07 embed-certs-404149 crio[837]: time="2025-11-15T11:48:07.717208789Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b82931ba-322d-46af-b5f5-890bdbf4960b name=/runtime.v1.ImageService/PullImage
	Nov 15 11:48:07 embed-certs-404149 crio[837]: time="2025-11-15T11:48:07.721015903Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 11:48:10 embed-certs-404149 crio[837]: time="2025-11-15T11:48:10.226149405Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b82931ba-322d-46af-b5f5-890bdbf4960b name=/runtime.v1.ImageService/PullImage
	Nov 15 11:48:10 embed-certs-404149 crio[837]: time="2025-11-15T11:48:10.227297299Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=88e28f5e-db83-40fa-ae11-b8b0f2c6b466 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:48:10 embed-certs-404149 crio[837]: time="2025-11-15T11:48:10.229525967Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5a2757cb-9ab8-42ee-a44d-6ab5210b0445 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:48:10 embed-certs-404149 crio[837]: time="2025-11-15T11:48:10.24155668Z" level=info msg="Creating container: default/busybox/busybox" id=5973d884-2148-4362-947e-cf8f2e1b5f16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:48:10 embed-certs-404149 crio[837]: time="2025-11-15T11:48:10.241850509Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:48:10 embed-certs-404149 crio[837]: time="2025-11-15T11:48:10.255018301Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:48:10 embed-certs-404149 crio[837]: time="2025-11-15T11:48:10.255719035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:48:10 embed-certs-404149 crio[837]: time="2025-11-15T11:48:10.281043317Z" level=info msg="Created container 67aa92d518676de5134872912ce85b6eb8a000d465ba1f9abc804b1b97832d36: default/busybox/busybox" id=5973d884-2148-4362-947e-cf8f2e1b5f16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:48:10 embed-certs-404149 crio[837]: time="2025-11-15T11:48:10.294433512Z" level=info msg="Starting container: 67aa92d518676de5134872912ce85b6eb8a000d465ba1f9abc804b1b97832d36" id=ca793907-85a9-457b-8a64-59215980269d name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:48:10 embed-certs-404149 crio[837]: time="2025-11-15T11:48:10.2967634Z" level=info msg="Started container" PID=1800 containerID=67aa92d518676de5134872912ce85b6eb8a000d465ba1f9abc804b1b97832d36 description=default/busybox/busybox id=ca793907-85a9-457b-8a64-59215980269d name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa0f179c785586fb8cf4fcb77a825b6ce95eb436b799955f80a80296934e414c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	67aa92d518676       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   aa0f179c78558       busybox                                      default
	4d0a8eb543de8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   7d3af6b11e874       coredns-66bc5c9577-2l449                     kube-system
	6de3e1eb7668d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   8355d2f3d8df0       storage-provisioner                          kube-system
	ef5376186d733       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   daffb15914fb9       kindnet-qsvh7                                kube-system
	e4aaf25cf2da0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   7113b1e9a1182       kube-proxy-5d2lb                             kube-system
	d957b8ea25a1e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   6446ce16f2f63       kube-apiserver-embed-certs-404149            kube-system
	8179b0a575eaf       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   b95b9c642721f       etcd-embed-certs-404149                      kube-system
	3cbe93bf5db01       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   8dfd71d057970       kube-controller-manager-embed-certs-404149   kube-system
	a6bc28d044fd2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   ff8ba61962156       kube-scheduler-embed-certs-404149            kube-system
	
	
	==> coredns [4d0a8eb543de824a39a75fee152bdc800279a9f7c131bbf4e109b6a26998cdcd] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44394 - 25442 "HINFO IN 8776341027564425760.682071048202703819. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016311212s
	
	
	==> describe nodes <==
	Name:               embed-certs-404149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-404149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=embed-certs-404149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_47_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:47:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-404149
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:48:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:48:08 +0000   Sat, 15 Nov 2025 11:47:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:48:08 +0000   Sat, 15 Nov 2025 11:47:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:48:08 +0000   Sat, 15 Nov 2025 11:47:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:48:08 +0000   Sat, 15 Nov 2025 11:48:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-404149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                e5de80db-1b6a-4760-801b-d0fd814d39f6
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-2l449                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-404149                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-qsvh7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-404149             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-404149    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-5d2lb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-404149             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 54s   kube-proxy       
	  Normal   Starting                 62s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s   kubelet          Node embed-certs-404149 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s   kubelet          Node embed-certs-404149 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s   kubelet          Node embed-certs-404149 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s   node-controller  Node embed-certs-404149 event: Registered Node embed-certs-404149 in Controller
	  Normal   NodeReady                15s   kubelet          Node embed-certs-404149 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov15 11:24] overlayfs: idmapped layers are currently not supported
	[Nov15 11:25] overlayfs: idmapped layers are currently not supported
	[Nov15 11:26] overlayfs: idmapped layers are currently not supported
	[Nov15 11:28] overlayfs: idmapped layers are currently not supported
	[ +23.116625] overlayfs: idmapped layers are currently not supported
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	[Nov15 11:44] overlayfs: idmapped layers are currently not supported
	[Nov15 11:46] overlayfs: idmapped layers are currently not supported
	[Nov15 11:47] overlayfs: idmapped layers are currently not supported
	[ +42.475391] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8179b0a575eaf89c78006bb9390b369ad604a9e3684eb157a297d7778988b243] <==
	{"level":"warn","ts":"2025-11-15T11:47:12.719163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.741722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.757688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.775248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.802314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.828241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.829712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.845266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.856778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.876083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.891763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.914089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.925055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.947117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.963897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:12.975790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:13.024185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:13.049651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:13.074179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:13.085567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:13.130367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:13.157784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:13.175853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:13.197317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:13.259845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43088","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:48:18 up  3:30,  0 user,  load average: 2.48, 3.13, 2.78
	Linux embed-certs-404149 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ef5376186d733955ba43ce1186e7faa971dd0dd12b4cfa48f5e086fabea63007] <==
	I1115 11:47:23.000751       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:47:23.001250       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 11:47:23.001401       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:47:23.001416       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:47:23.001432       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:47:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:47:23.291180       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:47:23.291198       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:47:23.291206       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:47:23.291375       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 11:47:53.291307       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 11:47:53.291458       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 11:47:53.291613       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 11:47:53.291735       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 11:47:54.891675       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:47:54.891766       1 metrics.go:72] Registering metrics
	I1115 11:47:54.891850       1 controller.go:711] "Syncing nftables rules"
	I1115 11:48:03.297691       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 11:48:03.297797       1 main.go:301] handling current node
	I1115 11:48:13.290758       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 11:48:13.290808       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d957b8ea25a1ef68a163bfa46536cd2a871a519a92bc5a7156d462b10f92e126] <==
	I1115 11:47:14.300146       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 11:47:14.300239       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 11:47:14.336254       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:47:14.349063       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:47:14.350652       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 11:47:14.369060       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:47:14.379274       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 11:47:14.980589       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 11:47:14.986996       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 11:47:14.987023       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:47:15.850309       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:47:15.904105       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:47:15.983469       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 11:47:15.991472       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1115 11:47:15.992757       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 11:47:15.998544       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 11:47:16.227131       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 11:47:17.071372       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 11:47:17.089114       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 11:47:17.099379       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 11:47:21.575157       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 11:47:22.234008       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:47:22.239146       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:47:22.377570       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1115 11:48:16.456185       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:59148: use of closed network connection
	
	
	==> kube-controller-manager [3cbe93bf5db01bf4679032337a53278b0be3d5000ee17784f8fd5ebcb410264e] <==
	I1115 11:47:21.269900       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-404149"
	I1115 11:47:21.269952       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 11:47:21.270408       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 11:47:21.270992       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 11:47:21.288505       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 11:47:21.288599       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 11:47:21.296753       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 11:47:21.296840       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 11:47:21.297149       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 11:47:21.297393       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 11:47:21.297479       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 11:47:21.297522       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 11:47:21.297547       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 11:47:21.297575       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 11:47:21.297589       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 11:47:21.298012       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 11:47:21.298054       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 11:47:21.298083       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 11:47:21.298098       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 11:47:21.298152       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:47:21.319668       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 11:47:21.320307       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:47:21.320320       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:47:21.320327       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:48:06.277225       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e4aaf25cf2da04e4f43e0229d993762b6a9ce57d7a94c9320f5c37c61b8edabd] <==
	I1115 11:47:23.072308       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:47:23.239447       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:47:23.345019       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:47:23.345065       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 11:47:23.345143       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:47:23.408176       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:47:23.410631       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:47:23.423495       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:47:23.424064       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:47:23.424092       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:47:23.438730       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:47:23.438751       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:47:23.439084       1 config.go:200] "Starting service config controller"
	I1115 11:47:23.439093       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:47:23.439445       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:47:23.439454       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:47:23.439897       1 config.go:309] "Starting node config controller"
	I1115 11:47:23.439904       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:47:23.439915       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:47:23.539540       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:47:23.543853       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 11:47:23.543896       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a6bc28d044fd23528030c87213a5f404236d7ed9af832141cfea60a62214ff11] <==
	I1115 11:47:14.816674       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:47:14.819219       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 11:47:14.819331       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:47:14.819359       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:47:14.819376       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1115 11:47:14.823557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 11:47:14.823722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 11:47:14.823838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 11:47:14.823931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 11:47:14.824025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 11:47:14.825016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:47:14.829348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 11:47:14.829559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 11:47:14.829666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 11:47:14.830660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:47:14.832821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 11:47:14.833089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 11:47:14.833185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 11:47:14.833235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 11:47:14.833270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 11:47:14.833307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 11:47:14.833349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 11:47:14.833445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 11:47:14.833500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1115 11:47:16.120306       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 11:47:18 embed-certs-404149 kubelet[1318]: I1115 11:47:18.078355    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-404149" podStartSLOduration=1.078337319 podStartE2EDuration="1.078337319s" podCreationTimestamp="2025-11-15 11:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:47:18.040536657 +0000 UTC m=+1.183013860" watchObservedRunningTime="2025-11-15 11:47:18.078337319 +0000 UTC m=+1.220814522"
	Nov 15 11:47:18 embed-certs-404149 kubelet[1318]: E1115 11:47:18.084486    1318 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-404149\" already exists" pod="kube-system/etcd-embed-certs-404149"
	Nov 15 11:47:18 embed-certs-404149 kubelet[1318]: I1115 11:47:18.107537    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-404149" podStartSLOduration=3.107521679 podStartE2EDuration="3.107521679s" podCreationTimestamp="2025-11-15 11:47:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:47:18.07824227 +0000 UTC m=+1.220719497" watchObservedRunningTime="2025-11-15 11:47:18.107521679 +0000 UTC m=+1.249998874"
	Nov 15 11:47:21 embed-certs-404149 kubelet[1318]: I1115 11:47:21.292698    1318 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 15 11:47:21 embed-certs-404149 kubelet[1318]: I1115 11:47:21.294503    1318 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 11:47:22 embed-certs-404149 kubelet[1318]: I1115 11:47:22.511738    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be30c5c3-f080-4721-b6d8-2f18f7736abe-xtables-lock\") pod \"kube-proxy-5d2lb\" (UID: \"be30c5c3-f080-4721-b6d8-2f18f7736abe\") " pod="kube-system/kube-proxy-5d2lb"
	Nov 15 11:47:22 embed-certs-404149 kubelet[1318]: I1115 11:47:22.511794    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4859l\" (UniqueName: \"kubernetes.io/projected/be30c5c3-f080-4721-b6d8-2f18f7736abe-kube-api-access-4859l\") pod \"kube-proxy-5d2lb\" (UID: \"be30c5c3-f080-4721-b6d8-2f18f7736abe\") " pod="kube-system/kube-proxy-5d2lb"
	Nov 15 11:47:22 embed-certs-404149 kubelet[1318]: I1115 11:47:22.511819    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5mlf\" (UniqueName: \"kubernetes.io/projected/65b3cd6e-66ac-4934-91d3-16fdc27af287-kube-api-access-g5mlf\") pod \"kindnet-qsvh7\" (UID: \"65b3cd6e-66ac-4934-91d3-16fdc27af287\") " pod="kube-system/kindnet-qsvh7"
	Nov 15 11:47:22 embed-certs-404149 kubelet[1318]: I1115 11:47:22.511928    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be30c5c3-f080-4721-b6d8-2f18f7736abe-kube-proxy\") pod \"kube-proxy-5d2lb\" (UID: \"be30c5c3-f080-4721-b6d8-2f18f7736abe\") " pod="kube-system/kube-proxy-5d2lb"
	Nov 15 11:47:22 embed-certs-404149 kubelet[1318]: I1115 11:47:22.511963    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/65b3cd6e-66ac-4934-91d3-16fdc27af287-cni-cfg\") pod \"kindnet-qsvh7\" (UID: \"65b3cd6e-66ac-4934-91d3-16fdc27af287\") " pod="kube-system/kindnet-qsvh7"
	Nov 15 11:47:22 embed-certs-404149 kubelet[1318]: I1115 11:47:22.511980    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65b3cd6e-66ac-4934-91d3-16fdc27af287-lib-modules\") pod \"kindnet-qsvh7\" (UID: \"65b3cd6e-66ac-4934-91d3-16fdc27af287\") " pod="kube-system/kindnet-qsvh7"
	Nov 15 11:47:22 embed-certs-404149 kubelet[1318]: I1115 11:47:22.511997    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be30c5c3-f080-4721-b6d8-2f18f7736abe-lib-modules\") pod \"kube-proxy-5d2lb\" (UID: \"be30c5c3-f080-4721-b6d8-2f18f7736abe\") " pod="kube-system/kube-proxy-5d2lb"
	Nov 15 11:47:22 embed-certs-404149 kubelet[1318]: I1115 11:47:22.512014    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65b3cd6e-66ac-4934-91d3-16fdc27af287-xtables-lock\") pod \"kindnet-qsvh7\" (UID: \"65b3cd6e-66ac-4934-91d3-16fdc27af287\") " pod="kube-system/kindnet-qsvh7"
	Nov 15 11:47:22 embed-certs-404149 kubelet[1318]: I1115 11:47:22.701725    1318 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 11:47:23 embed-certs-404149 kubelet[1318]: I1115 11:47:23.155351    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qsvh7" podStartSLOduration=1.155330377 podStartE2EDuration="1.155330377s" podCreationTimestamp="2025-11-15 11:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:47:23.104694142 +0000 UTC m=+6.247171345" watchObservedRunningTime="2025-11-15 11:47:23.155330377 +0000 UTC m=+6.297807589"
	Nov 15 11:47:23 embed-certs-404149 kubelet[1318]: I1115 11:47:23.261375    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5d2lb" podStartSLOduration=1.26135481 podStartE2EDuration="1.26135481s" podCreationTimestamp="2025-11-15 11:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:47:23.155987509 +0000 UTC m=+6.298464704" watchObservedRunningTime="2025-11-15 11:47:23.26135481 +0000 UTC m=+6.403832005"
	Nov 15 11:48:03 embed-certs-404149 kubelet[1318]: I1115 11:48:03.626288    1318 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 11:48:03 embed-certs-404149 kubelet[1318]: I1115 11:48:03.777933    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e943487-c90a-4a5d-8954-6d44870ececc-config-volume\") pod \"coredns-66bc5c9577-2l449\" (UID: \"5e943487-c90a-4a5d-8954-6d44870ececc\") " pod="kube-system/coredns-66bc5c9577-2l449"
	Nov 15 11:48:03 embed-certs-404149 kubelet[1318]: I1115 11:48:03.778199    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7s9n\" (UniqueName: \"kubernetes.io/projected/5e943487-c90a-4a5d-8954-6d44870ececc-kube-api-access-v7s9n\") pod \"coredns-66bc5c9577-2l449\" (UID: \"5e943487-c90a-4a5d-8954-6d44870ececc\") " pod="kube-system/coredns-66bc5c9577-2l449"
	Nov 15 11:48:03 embed-certs-404149 kubelet[1318]: I1115 11:48:03.778300    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7b6e6bb5-e4cf-486d-bfc3-d07a3848e221-tmp\") pod \"storage-provisioner\" (UID: \"7b6e6bb5-e4cf-486d-bfc3-d07a3848e221\") " pod="kube-system/storage-provisioner"
	Nov 15 11:48:03 embed-certs-404149 kubelet[1318]: I1115 11:48:03.778372    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96xbz\" (UniqueName: \"kubernetes.io/projected/7b6e6bb5-e4cf-486d-bfc3-d07a3848e221-kube-api-access-96xbz\") pod \"storage-provisioner\" (UID: \"7b6e6bb5-e4cf-486d-bfc3-d07a3848e221\") " pod="kube-system/storage-provisioner"
	Nov 15 11:48:04 embed-certs-404149 kubelet[1318]: I1115 11:48:04.320686    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2l449" podStartSLOduration=42.320668711 podStartE2EDuration="42.320668711s" podCreationTimestamp="2025-11-15 11:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:48:04.258586878 +0000 UTC m=+47.401064081" watchObservedRunningTime="2025-11-15 11:48:04.320668711 +0000 UTC m=+47.463145905"
	Nov 15 11:48:05 embed-certs-404149 kubelet[1318]: I1115 11:48:05.227531    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.227511854 podStartE2EDuration="42.227511854s" podCreationTimestamp="2025-11-15 11:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:48:04.325439036 +0000 UTC m=+47.467916239" watchObservedRunningTime="2025-11-15 11:48:05.227511854 +0000 UTC m=+48.369989057"
	Nov 15 11:48:07 embed-certs-404149 kubelet[1318]: I1115 11:48:07.514702    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl9pv\" (UniqueName: \"kubernetes.io/projected/3107e5d8-dcc1-42b2-8764-e2ce45e76676-kube-api-access-nl9pv\") pod \"busybox\" (UID: \"3107e5d8-dcc1-42b2-8764-e2ce45e76676\") " pod="default/busybox"
	Nov 15 11:48:07 embed-certs-404149 kubelet[1318]: W1115 11:48:07.697288    1318 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/crio-aa0f179c785586fb8cf4fcb77a825b6ce95eb436b799955f80a80296934e414c WatchSource:0}: Error finding container aa0f179c785586fb8cf4fcb77a825b6ce95eb436b799955f80a80296934e414c: Status 404 returned error can't find the container with id aa0f179c785586fb8cf4fcb77a825b6ce95eb436b799955f80a80296934e414c
	
	
	==> storage-provisioner [6de3e1eb7668d4c4bf3a45d5ca77ba9ab02d74bf99d2b2df862eac917ada46a0] <==
	I1115 11:48:04.115131       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 11:48:04.153657       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 11:48:04.153712       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 11:48:04.158333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:04.182639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:48:04.205727       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ea4f04f-64df-44af-afb1-3382b56ac68d", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-404149_e86a44d0-16b1-4906-a271-6eb45f6c0793 became leader
	I1115 11:48:04.205765       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 11:48:04.205857       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-404149_e86a44d0-16b1-4906-a271-6eb45f6c0793!
	W1115 11:48:04.252771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:48:04.309860       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-404149_e86a44d0-16b1-4906-a271-6eb45f6c0793!
	W1115 11:48:04.326277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:06.329988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:06.339909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:08.343982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:08.348762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:10.351890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:10.357617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:12.360163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:12.364423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:14.368578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:14.375202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:16.378383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:16.383398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:18.387559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:18.399880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-404149 -n embed-certs-404149
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-404149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.61s)
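For reference, the two post-mortem checks recorded above (helpers_test.go:262 and helpers_test.go:269) can be repeated by hand against the same profile; a minimal sketch, assuming the embed-certs-404149 profile from this run still exists and kubectl is configured with its context:

	# API server status for the profile (same command the helper ran)
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-404149 -n embed-certs-404149
	# list any pods that are not in the Running phase, across all namespaces
	kubectl --context embed-certs-404149 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'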

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-769461 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-769461 --alsologtostderr -v=1: exit status 80 (2.327555973s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-769461 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:48:55.105954  786441 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:48:55.106048  786441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:48:55.106054  786441 out.go:374] Setting ErrFile to fd 2...
	I1115 11:48:55.106059  786441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:48:55.106416  786441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:48:55.106707  786441 out.go:368] Setting JSON to false
	I1115 11:48:55.106724  786441 mustload.go:66] Loading cluster: default-k8s-diff-port-769461
	I1115 11:48:55.107552  786441 config.go:182] Loaded profile config "default-k8s-diff-port-769461": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:48:55.108220  786441 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-769461 --format={{.State.Status}}
	I1115 11:48:55.129342  786441 host.go:66] Checking if "default-k8s-diff-port-769461" exists ...
	I1115 11:48:55.129681  786441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:48:55.238243  786441 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-15 11:48:55.220492584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:48:55.238922  786441 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-769461 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 11:48:55.243243  786441 out.go:179] * Pausing node default-k8s-diff-port-769461 ... 
	I1115 11:48:55.246180  786441 host.go:66] Checking if "default-k8s-diff-port-769461" exists ...
	I1115 11:48:55.246717  786441 ssh_runner.go:195] Run: systemctl --version
	I1115 11:48:55.246771  786441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-769461
	I1115 11:48:55.274740  786441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/default-k8s-diff-port-769461/id_rsa Username:docker}
	I1115 11:48:55.393003  786441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:48:55.409563  786441 pause.go:52] kubelet running: true
	I1115 11:48:55.409652  786441 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:48:55.797647  786441 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:48:55.797734  786441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:48:55.892988  786441 cri.go:89] found id: "a6a07662328b4265eb840a2cc587982ae3774637d07cd67bc54699170e319aab"
	I1115 11:48:55.893070  786441 cri.go:89] found id: "1cfdbec99bdb1d48554aa742e63c6b88cb1485331ece237fddfb8403fadc953f"
	I1115 11:48:55.893096  786441 cri.go:89] found id: "096615ff4762fd1030ea22975fbda2deeafa29564f3d4a4bc42cb7213d7bca2e"
	I1115 11:48:55.893116  786441 cri.go:89] found id: "339704fd3e18f7555facc0bf0fdf7754a2f2d41f8760e86bd1a5494e1c73869d"
	I1115 11:48:55.893145  786441 cri.go:89] found id: "71751d3ff5736d4c1ddda1fdd64370dbafe7788b817bada80da23c901e7a380a"
	I1115 11:48:55.893204  786441 cri.go:89] found id: "58a8cafbd658243739209adc98b5cca4fb51708fc98f57d93b11c6d97859707b"
	I1115 11:48:55.893232  786441 cri.go:89] found id: "c28f3e68692e829f48e01931512e3679a6223533e56ed8f074c9d056fafd4609"
	I1115 11:48:55.893248  786441 cri.go:89] found id: "1222b8dec2b50ece8a4af1cb27e223b6a0079f14fc1c5ecf88240ddba9fe0ee0"
	I1115 11:48:55.893266  786441 cri.go:89] found id: "faf86f2f211634e1d17c6370364e838bc04fe0108542f93851f68044cecfe2f9"
	I1115 11:48:55.893307  786441 cri.go:89] found id: "c7f77e12165ebe38cbceae295cb465cfa8a6c106a6d30c3c8f42ceb144d7d207"
	I1115 11:48:55.893325  786441 cri.go:89] found id: "ba6623797a0ea7a24e6b56ca1b002092d0fb220ca803cf4677a6087af7eee357"
	I1115 11:48:55.893344  786441 cri.go:89] found id: ""
	I1115 11:48:55.893435  786441 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:48:55.918757  786441 retry.go:31] will retry after 237.981885ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:48:55Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:48:56.157205  786441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:48:56.172124  786441 pause.go:52] kubelet running: false
	I1115 11:48:56.172189  786441 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:48:56.420055  786441 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:48:56.420133  786441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:48:56.503256  786441 cri.go:89] found id: "a6a07662328b4265eb840a2cc587982ae3774637d07cd67bc54699170e319aab"
	I1115 11:48:56.503325  786441 cri.go:89] found id: "1cfdbec99bdb1d48554aa742e63c6b88cb1485331ece237fddfb8403fadc953f"
	I1115 11:48:56.503343  786441 cri.go:89] found id: "096615ff4762fd1030ea22975fbda2deeafa29564f3d4a4bc42cb7213d7bca2e"
	I1115 11:48:56.503361  786441 cri.go:89] found id: "339704fd3e18f7555facc0bf0fdf7754a2f2d41f8760e86bd1a5494e1c73869d"
	I1115 11:48:56.503378  786441 cri.go:89] found id: "71751d3ff5736d4c1ddda1fdd64370dbafe7788b817bada80da23c901e7a380a"
	I1115 11:48:56.503410  786441 cri.go:89] found id: "58a8cafbd658243739209adc98b5cca4fb51708fc98f57d93b11c6d97859707b"
	I1115 11:48:56.503431  786441 cri.go:89] found id: "c28f3e68692e829f48e01931512e3679a6223533e56ed8f074c9d056fafd4609"
	I1115 11:48:56.503448  786441 cri.go:89] found id: "1222b8dec2b50ece8a4af1cb27e223b6a0079f14fc1c5ecf88240ddba9fe0ee0"
	I1115 11:48:56.503465  786441 cri.go:89] found id: "faf86f2f211634e1d17c6370364e838bc04fe0108542f93851f68044cecfe2f9"
	I1115 11:48:56.503499  786441 cri.go:89] found id: "c7f77e12165ebe38cbceae295cb465cfa8a6c106a6d30c3c8f42ceb144d7d207"
	I1115 11:48:56.503521  786441 cri.go:89] found id: "ba6623797a0ea7a24e6b56ca1b002092d0fb220ca803cf4677a6087af7eee357"
	I1115 11:48:56.503539  786441 cri.go:89] found id: ""
	I1115 11:48:56.503627  786441 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:48:56.525834  786441 retry.go:31] will retry after 370.060876ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:48:56Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:48:56.896217  786441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:48:56.913998  786441 pause.go:52] kubelet running: false
	I1115 11:48:56.914138  786441 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:48:57.150357  786441 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:48:57.150488  786441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:48:57.289265  786441 cri.go:89] found id: "a6a07662328b4265eb840a2cc587982ae3774637d07cd67bc54699170e319aab"
	I1115 11:48:57.289335  786441 cri.go:89] found id: "1cfdbec99bdb1d48554aa742e63c6b88cb1485331ece237fddfb8403fadc953f"
	I1115 11:48:57.289355  786441 cri.go:89] found id: "096615ff4762fd1030ea22975fbda2deeafa29564f3d4a4bc42cb7213d7bca2e"
	I1115 11:48:57.289375  786441 cri.go:89] found id: "339704fd3e18f7555facc0bf0fdf7754a2f2d41f8760e86bd1a5494e1c73869d"
	I1115 11:48:57.289407  786441 cri.go:89] found id: "71751d3ff5736d4c1ddda1fdd64370dbafe7788b817bada80da23c901e7a380a"
	I1115 11:48:57.289427  786441 cri.go:89] found id: "58a8cafbd658243739209adc98b5cca4fb51708fc98f57d93b11c6d97859707b"
	I1115 11:48:57.289443  786441 cri.go:89] found id: "c28f3e68692e829f48e01931512e3679a6223533e56ed8f074c9d056fafd4609"
	I1115 11:48:57.289461  786441 cri.go:89] found id: "1222b8dec2b50ece8a4af1cb27e223b6a0079f14fc1c5ecf88240ddba9fe0ee0"
	I1115 11:48:57.289499  786441 cri.go:89] found id: "faf86f2f211634e1d17c6370364e838bc04fe0108542f93851f68044cecfe2f9"
	I1115 11:48:57.289529  786441 cri.go:89] found id: "c7f77e12165ebe38cbceae295cb465cfa8a6c106a6d30c3c8f42ceb144d7d207"
	I1115 11:48:57.289546  786441 cri.go:89] found id: "ba6623797a0ea7a24e6b56ca1b002092d0fb220ca803cf4677a6087af7eee357"
	I1115 11:48:57.289577  786441 cri.go:89] found id: ""
	I1115 11:48:57.289658  786441 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:48:57.328005  786441 out.go:203] 
	W1115 11:48:57.336445  786441 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:48:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:48:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 11:48:57.336476  786441 out.go:285] * 
	* 
	W1115 11:48:57.343310  786441 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 11:48:57.349570  786441 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-769461 --alsologtostderr -v=1 failed: exit status 80
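The failure mode is consistent across all three attempts above: crictl still reports eleven running containers in kube-system, yet every `sudo runc list -f json` call exits 1 with `open /run/runc: no such file or directory`, so after its backoff retries the pause path aborts with GUEST_PAUSE. The snippet below is only an editorial sketch of that retry-then-fail shape, assuming Go with sudo and the runc CLI available on the node; it is not minikube's implementation, and listRuncContainers is a name invented for the example.

    // Illustrative only: a retry loop in the spirit of the retry.go lines above,
    // not minikube's code.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // listRuncContainers shells out to `sudo runc list -f json`, retrying a few
    // times with a fixed backoff before giving up, as the trace above does.
    func listRuncContainers(attempts int, backoff time.Duration) ([]byte, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
            if err == nil {
                return out, nil
            }
            // On a host where runc's default state root (/run/runc) does not exist,
            // this carries the same "no such file or directory" error captured above.
            lastErr = fmt.Errorf("runc list: %w: %s", err, out)
            time.Sleep(backoff)
        }
        return nil, lastErr
    }

    func main() {
        if _, err := listRuncContainers(3, 300*time.Millisecond); err != nil {
            fmt.Println("giving up:", err)
        }
    }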
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-769461
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-769461:

-- stdout --
	[
	    {
	        "Id": "6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054",
	        "Created": "2025-11-15T11:46:05.665660971Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 781446,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:47:44.765651606Z",
	            "FinishedAt": "2025-11-15T11:47:43.91167659Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/hostname",
	        "HostsPath": "/var/lib/docker/containers/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/hosts",
	        "LogPath": "/var/lib/docker/containers/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054-json.log",
	        "Name": "/default-k8s-diff-port-769461",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-769461:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-769461",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054",
	                "LowerDir": "/var/lib/docker/overlay2/b4652a04669bd6a09fb7076cb3aa2068a43fcd682c401faf158afa049b1e75b7-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b4652a04669bd6a09fb7076cb3aa2068a43fcd682c401faf158afa049b1e75b7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b4652a04669bd6a09fb7076cb3aa2068a43fcd682c401faf158afa049b1e75b7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b4652a04669bd6a09fb7076cb3aa2068a43fcd682c401faf158afa049b1e75b7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-769461",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-769461/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-769461",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-769461",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-769461",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0133bc8e0970008bcf663efd77e35b231bdde0d40bdc8ff2779ca6d99568e6ed",
	            "SandboxKey": "/var/run/docker/netns/0133bc8e0970",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33809"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33810"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33811"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33812"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-769461": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:43:63:8b:85:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "97f28ee3e21c22cb67f771931d4a0c5ff8297079a2da7de0d16d0518cb24266f",
	                    "EndpointID": "19fb2afd474ed42ee7c2a9482a51d9e9ed1d9a222adb5b13668c258a7a92d17d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-769461",
	                        "6bc3c2610e90"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
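The inspect output shows the machine container up and its guest ports published only on 127.0.0.1 (22 -> 33809, 2376 -> 33810, 5000 -> 33811, 8444 -> 33812, 32443 -> 33813), which is what the status and log-collection commands that follow rely on. As a rough illustration rather than part of the test suite, the sketch below (Go, docker CLI assumed, struct trimmed to the fields visible above) extracts the SSH host port from the same JSON; it does the equivalent of the `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` template minikube runs later in this log.

    // Illustrative only: parse `docker inspect` output (shape as dumped above)
    // and print the host endpoint mapped to the guest's 22/tcp.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "os/exec"
    )

    type inspectEntry struct {
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIp   string
                HostPort string
            }
        }
    }

    func main() {
        out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-769461").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "docker inspect:", err)
            os.Exit(1)
        }
        var entries []inspectEntry
        if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
            fmt.Fprintln(os.Stderr, "unexpected inspect output:", err)
            os.Exit(1)
        }
        // Against the dump above this prints "ssh reachable at 127.0.0.1:33809".
        if b, ok := entries[0].NetworkSettings.Ports["22/tcp"]; ok && len(b) > 0 {
            fmt.Printf("ssh reachable at %s:%s\n", b[0].HostIp, b[0].HostPort)
        }
    }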
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-769461 -n default-k8s-diff-port-769461
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-769461 -n default-k8s-diff-port-769461: exit status 2 (470.725302ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-769461 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-769461 logs -n 25: (2.003011954s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-303284 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-303284          │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ delete  │ -p cert-options-303284                                                                                                                                                                                                                        │ cert-options-303284          │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:44 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-872969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │                     │
	│ stop    │ -p old-k8s-version-872969 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-872969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:44 UTC │
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:45 UTC │
	│ image   │ old-k8s-version-872969 image list --format=json                                                                                                                                                                                               │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ pause   │ -p old-k8s-version-872969 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │                     │
	│ delete  │ -p old-k8s-version-872969                                                                                                                                                                                                                     │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ delete  │ -p old-k8s-version-872969                                                                                                                                                                                                                     │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ start   │ -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:47 UTC │
	│ start   │ -p cert-expiration-636406 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ delete  │ -p cert-expiration-636406                                                                                                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-769461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-769461 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:47 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-769461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:47 UTC │
	│ start   │ -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable metrics-server -p embed-certs-404149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ stop    │ -p embed-certs-404149 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable dashboard -p embed-certs-404149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ image   │ default-k8s-diff-port-769461 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ pause   │ -p default-k8s-diff-port-769461 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:48:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:48:31.334673  784287 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:48:31.334869  784287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:48:31.334899  784287 out.go:374] Setting ErrFile to fd 2...
	I1115 11:48:31.334926  784287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:48:31.335590  784287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:48:31.336044  784287 out.go:368] Setting JSON to false
	I1115 11:48:31.337158  784287 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12662,"bootTime":1763194649,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:48:31.337261  784287 start.go:143] virtualization:  
	I1115 11:48:31.342177  784287 out.go:179] * [embed-certs-404149] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:48:31.345186  784287 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:48:31.345322  784287 notify.go:221] Checking for updates...
	I1115 11:48:31.351046  784287 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:48:31.353910  784287 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:48:31.356707  784287 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:48:31.359463  784287 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:48:31.362347  784287 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:48:31.365972  784287 config.go:182] Loaded profile config "embed-certs-404149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:48:31.366611  784287 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:48:31.401420  784287 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:48:31.401616  784287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:48:31.465390  784287 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:48:31.455091536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:48:31.465505  784287 docker.go:319] overlay module found
	I1115 11:48:31.468901  784287 out.go:179] * Using the docker driver based on existing profile
	I1115 11:48:31.471896  784287 start.go:309] selected driver: docker
	I1115 11:48:31.471920  784287 start.go:930] validating driver "docker" against &{Name:embed-certs-404149 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:48:31.472021  784287 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:48:31.472782  784287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:48:31.551803  784287 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:48:31.542417248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:48:31.552155  784287 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:48:31.552189  784287 cni.go:84] Creating CNI manager for ""
	I1115 11:48:31.552248  784287 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:48:31.552294  784287 start.go:353] cluster config:
	{Name:embed-certs-404149 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:48:31.555438  784287 out.go:179] * Starting "embed-certs-404149" primary control-plane node in "embed-certs-404149" cluster
	I1115 11:48:31.558267  784287 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:48:31.561151  784287 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:48:31.564118  784287 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:48:31.564107  784287 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:48:31.564165  784287 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 11:48:31.564175  784287 cache.go:65] Caching tarball of preloaded images
	I1115 11:48:31.564281  784287 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:48:31.564290  784287 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:48:31.564421  784287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/config.json ...
	I1115 11:48:31.584043  784287 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:48:31.584067  784287 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:48:31.584081  784287 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:48:31.584106  784287 start.go:360] acquireMachinesLock for embed-certs-404149: {Name:mka215e00af293eebe84cec598dbc8661faf4dbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:48:31.584171  784287 start.go:364] duration metric: took 36.284µs to acquireMachinesLock for "embed-certs-404149"
	I1115 11:48:31.584194  784287 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:48:31.584200  784287 fix.go:54] fixHost starting: 
	I1115 11:48:31.584452  784287 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:48:31.602115  784287 fix.go:112] recreateIfNeeded on embed-certs-404149: state=Stopped err=<nil>
	W1115 11:48:31.602148  784287 fix.go:138] unexpected machine state, will restart: <nil>
	W1115 11:48:29.934856  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	W1115 11:48:31.938522  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	W1115 11:48:34.435093  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	I1115 11:48:31.605367  784287 out.go:252] * Restarting existing docker container for "embed-certs-404149" ...
	I1115 11:48:31.605453  784287 cli_runner.go:164] Run: docker start embed-certs-404149
	I1115 11:48:31.859820  784287 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:48:31.882017  784287 kic.go:430] container "embed-certs-404149" state is running.
	I1115 11:48:31.882510  784287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-404149
	I1115 11:48:31.909007  784287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/config.json ...
	I1115 11:48:31.909346  784287 machine.go:94] provisionDockerMachine start ...
	I1115 11:48:31.909509  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:31.943317  784287 main.go:143] libmachine: Using SSH client type: native
	I1115 11:48:31.943640  784287 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33814 <nil> <nil>}
	I1115 11:48:31.943649  784287 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:48:31.944286  784287 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:48:35.108950  784287 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-404149
	
	I1115 11:48:35.108978  784287 ubuntu.go:182] provisioning hostname "embed-certs-404149"
	I1115 11:48:35.109049  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:35.127351  784287 main.go:143] libmachine: Using SSH client type: native
	I1115 11:48:35.127696  784287 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33814 <nil> <nil>}
	I1115 11:48:35.127713  784287 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-404149 && echo "embed-certs-404149" | sudo tee /etc/hostname
	I1115 11:48:35.292425  784287 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-404149
	
	I1115 11:48:35.292544  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:35.312761  784287 main.go:143] libmachine: Using SSH client type: native
	I1115 11:48:35.313119  784287 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33814 <nil> <nil>}
	I1115 11:48:35.313151  784287 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-404149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-404149/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-404149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:48:35.465096  784287 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:48:35.465124  784287 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:48:35.465155  784287 ubuntu.go:190] setting up certificates
	I1115 11:48:35.465165  784287 provision.go:84] configureAuth start
	I1115 11:48:35.465223  784287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-404149
	I1115 11:48:35.482113  784287 provision.go:143] copyHostCerts
	I1115 11:48:35.482182  784287 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:48:35.482202  784287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:48:35.482282  784287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:48:35.482378  784287 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:48:35.482389  784287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:48:35.482415  784287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:48:35.482472  784287 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:48:35.482481  784287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:48:35.482506  784287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:48:35.482563  784287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.embed-certs-404149 san=[127.0.0.1 192.168.76.2 embed-certs-404149 localhost minikube]
	I1115 11:48:35.569528  784287 provision.go:177] copyRemoteCerts
	I1115 11:48:35.569597  784287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:48:35.569643  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:35.589878  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:35.697861  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:48:35.719325  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:48:35.739274  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 11:48:35.757531  784287 provision.go:87] duration metric: took 292.349901ms to configureAuth
	I1115 11:48:35.757565  784287 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:48:35.757764  784287 config.go:182] Loaded profile config "embed-certs-404149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:48:35.757870  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:35.775038  784287 main.go:143] libmachine: Using SSH client type: native
	I1115 11:48:35.775344  784287 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33814 <nil> <nil>}
	I1115 11:48:35.775360  784287 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:48:36.123616  784287 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:48:36.123638  784287 machine.go:97] duration metric: took 4.214272552s to provisionDockerMachine
	I1115 11:48:36.123650  784287 start.go:293] postStartSetup for "embed-certs-404149" (driver="docker")
	I1115 11:48:36.123661  784287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:48:36.123722  784287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:48:36.123759  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:36.146248  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:36.253488  784287 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:48:36.256895  784287 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:48:36.256922  784287 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:48:36.256934  784287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:48:36.256995  784287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:48:36.257091  784287 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:48:36.257244  784287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:48:36.265208  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:48:36.283527  784287 start.go:296] duration metric: took 159.861565ms for postStartSetup
	I1115 11:48:36.283654  784287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:48:36.283748  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:36.301208  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:36.406516  784287 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:48:36.411550  784287 fix.go:56] duration metric: took 4.827342425s for fixHost
	I1115 11:48:36.411585  784287 start.go:83] releasing machines lock for "embed-certs-404149", held for 4.827401215s
	I1115 11:48:36.411651  784287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-404149
	I1115 11:48:36.429341  784287 ssh_runner.go:195] Run: cat /version.json
	I1115 11:48:36.429412  784287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:48:36.429474  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:36.429416  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:36.451457  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:36.455674  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:36.665809  784287 ssh_runner.go:195] Run: systemctl --version
	I1115 11:48:36.672597  784287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:48:36.710201  784287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:48:36.714721  784287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:48:36.714824  784287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:48:36.724601  784287 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:48:36.724627  784287 start.go:496] detecting cgroup driver to use...
	I1115 11:48:36.724680  784287 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:48:36.724736  784287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:48:36.740345  784287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:48:36.753898  784287 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:48:36.753989  784287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:48:36.771070  784287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:48:36.787158  784287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:48:36.918533  784287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:48:37.040468  784287 docker.go:234] disabling docker service ...
	I1115 11:48:37.040569  784287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:48:37.056160  784287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:48:37.069487  784287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:48:37.188055  784287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:48:37.300164  784287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:48:37.313658  784287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:48:37.328848  784287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:48:37.328948  784287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:48:37.338985  784287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:48:37.339054  784287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:48:37.347886  784287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:48:37.356799  784287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:48:37.366123  784287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:48:37.374481  784287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:48:37.383310  784287 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:48:37.391315  784287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:48:37.399966  784287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:48:37.409405  784287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:48:37.416729  784287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:48:37.542347  784287 ssh_runner.go:195] Run: sudo systemctl restart crio
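The sed pipeline above rewrites the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf before crio is restarted: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to "cgroupfs", conmon_cgroup is reset to "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A minimal manual check of the resulting drop-in (not something the test itself runs) would be:

    # inspect the settings the sed commands above should have produced
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",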
	I1115 11:48:37.682070  784287 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:48:37.682144  784287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:48:37.686262  784287 start.go:564] Will wait 60s for crictl version
	I1115 11:48:37.686329  784287 ssh_runner.go:195] Run: which crictl
	I1115 11:48:37.689929  784287 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:48:37.714851  784287 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:48:37.714935  784287 ssh_runner.go:195] Run: crio --version
	I1115 11:48:37.744556  784287 ssh_runner.go:195] Run: crio --version
	I1115 11:48:37.780282  784287 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:48:37.783221  784287 cli_runner.go:164] Run: docker network inspect embed-certs-404149 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:48:37.799913  784287 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 11:48:37.804136  784287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:48:37.815015  784287 kubeadm.go:884] updating cluster {Name:embed-certs-404149 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:48:37.815139  784287 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:48:37.815198  784287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:48:37.848473  784287 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:48:37.848500  784287 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:48:37.848557  784287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:48:37.879776  784287 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:48:37.879799  784287 cache_images.go:86] Images are preloaded, skipping loading
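The two `sudo crictl images --output json` calls above are how minikube verifies that the preload tarball already provides every image needed for v1.34.1 on cri-o. Assuming jq is available on the node, the same inventory can be listed by hand:

    # manual equivalent of the preload check: image tags known to CRI-O
    sudo crictl images --output json | jq -r '.images[].repoTags[]'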
	I1115 11:48:37.879807  784287 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 11:48:37.879912  784287 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-404149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
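The drop-in rendered above uses the standard systemd override pattern: the bare `ExecStart=` line clears any ExecStart inherited from the base kubelet unit, and the following `ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet ...` line replaces it, so only the minikube-specific command line runs. The merged result on the node can be viewed with:

    # show the base kubelet unit together with the 10-kubeadm.conf drop-in written below
    systemctl cat kubelet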
	I1115 11:48:37.879987  784287 ssh_runner.go:195] Run: crio config
	I1115 11:48:37.967367  784287 cni.go:84] Creating CNI manager for ""
	I1115 11:48:37.967430  784287 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:48:37.967466  784287 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:48:37.967504  784287 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-404149 NodeName:embed-certs-404149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:48:37.967662  784287 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-404149"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
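This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down (2215 bytes) and later diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring. A quick sanity check on the generated file, outside the test flow, is to confirm it carries the expected four documents:

    # the generated kubeadm config should contain exactly these kinds
    grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
    # kind: InitConfiguration
    # kind: ClusterConfiguration
    # kind: KubeletConfiguration
    # kind: KubeProxyConfiguration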
	
	I1115 11:48:37.967758  784287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:48:37.978086  784287 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:48:37.978210  784287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:48:37.986101  784287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 11:48:38.002334  784287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:48:38.021586  784287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1115 11:48:38.039446  784287 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:48:38.045007  784287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:48:38.057493  784287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:48:38.178626  784287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:48:38.194187  784287 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149 for IP: 192.168.76.2
	I1115 11:48:38.194209  784287 certs.go:195] generating shared ca certs ...
	I1115 11:48:38.194226  784287 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:48:38.194368  784287 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:48:38.194432  784287 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:48:38.194446  784287 certs.go:257] generating profile certs ...
	I1115 11:48:38.194541  784287 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/client.key
	I1115 11:48:38.194611  784287 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.key.feb77388
	I1115 11:48:38.194654  784287 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/proxy-client.key
	I1115 11:48:38.194766  784287 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:48:38.194799  784287 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:48:38.194812  784287 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:48:38.194841  784287 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:48:38.194866  784287 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:48:38.194891  784287 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:48:38.194934  784287 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:48:38.195589  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:48:38.218514  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:48:38.238898  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:48:38.259623  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:48:38.280826  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 11:48:38.302053  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:48:38.322361  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:48:38.352143  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 11:48:38.376887  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:48:38.398465  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:48:38.426379  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:48:38.448490  784287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:48:38.464129  784287 ssh_runner.go:195] Run: openssl version
	I1115 11:48:38.472126  784287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:48:38.481577  784287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:48:38.486689  784287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:48:38.486754  784287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:48:38.533900  784287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:48:38.541960  784287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:48:38.550370  784287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:48:38.555052  784287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:48:38.555124  784287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:48:38.596315  784287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:48:38.604179  784287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:48:38.612361  784287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:48:38.616111  784287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:48:38.616173  784287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:48:38.657496  784287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
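The openssl/ln pairs above reproduce the standard OpenSSL CA directory layout: every certificate copied into /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0, where the hash comes from `openssl x509 -hash`. That is why minikubeCA.pem ends up linked as b5213941.0 and 5865612.pem as 3ec20f2e.0. The mapping can be re-derived by hand:

    # print the subject hash that names the /etc/ssl/certs symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem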
	I1115 11:48:38.665619  784287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:48:38.670071  784287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:48:38.716095  784287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:48:38.766741  784287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:48:38.812823  784287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:48:38.856654  784287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:48:38.918678  784287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
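Each `-checkend 86400` call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, non-zero means it is expired or about to expire, which is what would trigger regeneration. The same check for any single certificate:

    # 0 = valid for at least another 24h, non-zero = expiring or expired
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; echo $?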
	I1115 11:48:39.061363  784287 kubeadm.go:401] StartCluster: {Name:embed-certs-404149 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:48:39.061470  784287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:48:39.061584  784287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:48:39.132441  784287 cri.go:89] found id: ""
	I1115 11:48:39.132555  784287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:48:39.148317  784287 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:48:39.148348  784287 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:48:39.148446  784287 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:48:39.160273  784287 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:48:39.161021  784287 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-404149" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:48:39.161362  784287 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-584713/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-404149" cluster setting kubeconfig missing "embed-certs-404149" context setting]
	I1115 11:48:39.161942  784287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:48:39.164113  784287 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:48:39.188651  784287 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1115 11:48:39.188697  784287 kubeadm.go:602] duration metric: took 40.323417ms to restartPrimaryControlPlane
	I1115 11:48:39.188713  784287 kubeadm.go:403] duration metric: took 127.365595ms to StartCluster
	I1115 11:48:39.188729  784287 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:48:39.188814  784287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:48:39.190736  784287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:48:39.190999  784287 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:48:39.191403  784287 config.go:182] Loaded profile config "embed-certs-404149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:48:39.191399  784287 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:48:39.191492  784287 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-404149"
	I1115 11:48:39.191505  784287 addons.go:70] Setting dashboard=true in profile "embed-certs-404149"
	I1115 11:48:39.191514  784287 addons.go:70] Setting default-storageclass=true in profile "embed-certs-404149"
	I1115 11:48:39.191519  784287 addons.go:239] Setting addon dashboard=true in "embed-certs-404149"
	W1115 11:48:39.191525  784287 addons.go:248] addon dashboard should already be in state true
	I1115 11:48:39.191531  784287 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-404149"
	I1115 11:48:39.191565  784287 host.go:66] Checking if "embed-certs-404149" exists ...
	I1115 11:48:39.191837  784287 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:48:39.192117  784287 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:48:39.191507  784287 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-404149"
	W1115 11:48:39.192819  784287 addons.go:248] addon storage-provisioner should already be in state true
	I1115 11:48:39.192949  784287 host.go:66] Checking if "embed-certs-404149" exists ...
	I1115 11:48:39.193562  784287 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:48:39.196869  784287 out.go:179] * Verifying Kubernetes components...
	I1115 11:48:39.202520  784287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:48:39.251452  784287 addons.go:239] Setting addon default-storageclass=true in "embed-certs-404149"
	W1115 11:48:39.251476  784287 addons.go:248] addon default-storageclass should already be in state true
	I1115 11:48:39.251501  784287 host.go:66] Checking if "embed-certs-404149" exists ...
	I1115 11:48:39.251952  784287 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:48:39.266881  784287 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:48:39.267000  784287 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 11:48:39.271262  784287 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:48:39.271285  784287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:48:39.271351  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:39.281039  784287 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1115 11:48:36.439678  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	W1115 11:48:38.963734  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	I1115 11:48:39.284262  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 11:48:39.284288  784287 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 11:48:39.284356  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:39.306461  784287 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:48:39.306483  784287 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:48:39.306561  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:39.330290  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:39.340978  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:39.364195  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:39.643527  784287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:48:39.686383  784287 node_ready.go:35] waiting up to 6m0s for node "embed-certs-404149" to be "Ready" ...
	I1115 11:48:39.691352  784287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:48:39.699530  784287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:48:39.713996  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 11:48:39.714098  784287 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 11:48:39.776756  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 11:48:39.776848  784287 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 11:48:39.815260  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 11:48:39.815362  784287 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 11:48:39.942365  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 11:48:39.942448  784287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 11:48:39.999810  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 11:48:39.999902  784287 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 11:48:40.050041  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 11:48:40.050074  784287 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 11:48:40.069473  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 11:48:40.069496  784287 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 11:48:40.092670  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 11:48:40.092692  784287 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 11:48:40.120783  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:48:40.120849  784287 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 11:48:40.150982  784287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:48:40.434798  781316 pod_ready.go:94] pod "coredns-66bc5c9577-xpkjw" is "Ready"
	I1115 11:48:40.434874  781316 pod_ready.go:86] duration metric: took 40.505224356s for pod "coredns-66bc5c9577-xpkjw" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:40.440492  781316 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:40.446429  781316 pod_ready.go:94] pod "etcd-default-k8s-diff-port-769461" is "Ready"
	I1115 11:48:40.446503  781316 pod_ready.go:86] duration metric: took 5.989473ms for pod "etcd-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:40.449058  781316 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:40.457634  781316 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-769461" is "Ready"
	I1115 11:48:40.457655  781316 pod_ready.go:86] duration metric: took 8.530833ms for pod "kube-apiserver-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:40.460301  781316 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:40.632738  781316 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-769461" is "Ready"
	I1115 11:48:40.632764  781316 pod_ready.go:86] duration metric: took 172.44373ms for pod "kube-controller-manager-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:40.832649  781316 pod_ready.go:83] waiting for pod "kube-proxy-j8s2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:41.233694  781316 pod_ready.go:94] pod "kube-proxy-j8s2w" is "Ready"
	I1115 11:48:41.233772  781316 pod_ready.go:86] duration metric: took 401.047865ms for pod "kube-proxy-j8s2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:41.432962  781316 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:41.833241  781316 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-769461" is "Ready"
	I1115 11:48:41.833318  781316 pod_ready.go:86] duration metric: took 400.275384ms for pod "kube-scheduler-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:41.833346  781316 pod_ready.go:40] duration metric: took 41.9087395s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:48:41.926602  781316 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:48:41.929858  781316 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-769461" cluster and "default" namespace by default
	I1115 11:48:44.812053  784287 node_ready.go:49] node "embed-certs-404149" is "Ready"
	I1115 11:48:44.812080  784287 node_ready.go:38] duration metric: took 5.12559031s for node "embed-certs-404149" to be "Ready" ...
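node_ready.go simply polls the node object until its Ready condition turns True, which here took about 5.1s after the kubelet restart. A roughly equivalent manual check from the host, assuming the kubeconfig context created above, would be:

    # wait (with the same 6m budget) for the node's Ready condition
    kubectl --context embed-certs-404149 wait --for=condition=Ready node/embed-certs-404149 --timeout=6m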
	I1115 11:48:44.812094  784287 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:48:44.812150  784287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:48:46.593919  784287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.90247671s)
	I1115 11:48:46.593989  784287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.894355488s)
	I1115 11:48:46.650644  784287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.499612838s)
	I1115 11:48:46.650925  784287 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.838743949s)
	I1115 11:48:46.650962  784287 api_server.go:72] duration metric: took 7.459932267s to wait for apiserver process to appear ...
	I1115 11:48:46.650982  784287 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:48:46.651015  784287 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:48:46.653827  784287 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-404149 addons enable metrics-server
	
	I1115 11:48:46.656772  784287 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1115 11:48:46.659577  784287 addons.go:515] duration metric: took 7.468173964s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1115 11:48:46.664440  784287 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:48:46.664461  784287 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:48:47.152060  784287 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:48:47.170073  784287 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 11:48:47.173783  784287 api_server.go:141] control plane version: v1.34.1
	I1115 11:48:47.173844  784287 api_server.go:131] duration metric: took 522.842244ms to wait for apiserver health ...
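The 500-then-200 sequence above is the normal shape of a control-plane restart: /healthz aggregates the post-start hooks, and rbac/bootstrap-roles keeps failing until the bootstrap RBAC objects are reconciled, after which the endpoint returns a plain "ok". The per-check breakdown can be requested directly; anonymous access to /healthz is permitted by the default RBAC, so no token should be needed:

    # verbose per-check healthz output from the API server polled above
    curl -ks "https://192.168.76.2:8443/healthz?verbose"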
	I1115 11:48:47.173866  784287 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:48:47.188636  784287 system_pods.go:59] 8 kube-system pods found
	I1115 11:48:47.188722  784287 system_pods.go:61] "coredns-66bc5c9577-2l449" [5e943487-c90a-4a5d-8954-6d44870ececc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:48:47.188746  784287 system_pods.go:61] "etcd-embed-certs-404149" [061e2652-8536-4564-bd3c-aa1d961acc3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:48:47.188785  784287 system_pods.go:61] "kindnet-qsvh7" [65b3cd6e-66ac-4934-91d3-16fdc27af287] Running
	I1115 11:48:47.188810  784287 system_pods.go:61] "kube-apiserver-embed-certs-404149" [df336c4d-f7c7-4ec6-98d8-dc1aef88cea7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:48:47.188829  784287 system_pods.go:61] "kube-controller-manager-embed-certs-404149" [cb5308c4-97af-4752-9cd2-856eb8d915fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:48:47.188849  784287 system_pods.go:61] "kube-proxy-5d2lb" [be30c5c3-f080-4721-b6d8-2f18f7736abe] Running
	I1115 11:48:47.188908  784287 system_pods.go:61] "kube-scheduler-embed-certs-404149" [808c1b05-090a-4dd9-9c5b-53960a09c527] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:48:47.188932  784287 system_pods.go:61] "storage-provisioner" [7b6e6bb5-e4cf-486d-bfc3-d07a3848e221] Running
	I1115 11:48:47.188953  784287 system_pods.go:74] duration metric: took 15.069202ms to wait for pod list to return data ...
	I1115 11:48:47.188973  784287 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:48:47.193321  784287 default_sa.go:45] found service account: "default"
	I1115 11:48:47.193377  784287 default_sa.go:55] duration metric: took 4.383998ms for default service account to be created ...
	I1115 11:48:47.193401  784287 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:48:47.203395  784287 system_pods.go:86] 8 kube-system pods found
	I1115 11:48:47.203473  784287 system_pods.go:89] "coredns-66bc5c9577-2l449" [5e943487-c90a-4a5d-8954-6d44870ececc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:48:47.203498  784287 system_pods.go:89] "etcd-embed-certs-404149" [061e2652-8536-4564-bd3c-aa1d961acc3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:48:47.203535  784287 system_pods.go:89] "kindnet-qsvh7" [65b3cd6e-66ac-4934-91d3-16fdc27af287] Running
	I1115 11:48:47.203560  784287 system_pods.go:89] "kube-apiserver-embed-certs-404149" [df336c4d-f7c7-4ec6-98d8-dc1aef88cea7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:48:47.203589  784287 system_pods.go:89] "kube-controller-manager-embed-certs-404149" [cb5308c4-97af-4752-9cd2-856eb8d915fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:48:47.203607  784287 system_pods.go:89] "kube-proxy-5d2lb" [be30c5c3-f080-4721-b6d8-2f18f7736abe] Running
	I1115 11:48:47.203641  784287 system_pods.go:89] "kube-scheduler-embed-certs-404149" [808c1b05-090a-4dd9-9c5b-53960a09c527] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:48:47.203665  784287 system_pods.go:89] "storage-provisioner" [7b6e6bb5-e4cf-486d-bfc3-d07a3848e221] Running
	I1115 11:48:47.203715  784287 system_pods.go:126] duration metric: took 10.295283ms to wait for k8s-apps to be running ...
	I1115 11:48:47.203746  784287 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:48:47.203824  784287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:48:47.220294  784287 system_svc.go:56] duration metric: took 16.539749ms WaitForService to wait for kubelet
	I1115 11:48:47.220363  784287 kubeadm.go:587] duration metric: took 8.029331364s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:48:47.220398  784287 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:48:47.224107  784287 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:48:47.224181  784287 node_conditions.go:123] node cpu capacity is 2
	I1115 11:48:47.224207  784287 node_conditions.go:105] duration metric: took 3.787561ms to run NodePressure ...
	I1115 11:48:47.224230  784287 start.go:242] waiting for startup goroutines ...
	I1115 11:48:47.224264  784287 start.go:247] waiting for cluster config update ...
	I1115 11:48:47.224294  784287 start.go:256] writing updated cluster config ...
	I1115 11:48:47.224606  784287 ssh_runner.go:195] Run: rm -f paused
	I1115 11:48:47.230151  784287 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:48:47.234291  784287 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2l449" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:48:49.240913  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	W1115 11:48:51.242056  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	W1115 11:48:53.740285  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	W1115 11:48:55.741002  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.922833771Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.930441362Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.930597294Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.930669352Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.933846732Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.933992867Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.934065303Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.94351645Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.943681752Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.943765872Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.958225172Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.95837984Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.04056045Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a2591d98-e700-45b9-9dde-3957640dc151 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.042044593Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8becddf0-02be-439b-9c4a-784f330f81a7 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.043142369Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w/dashboard-metrics-scraper" id=8589ac92-9fc8-43f9-817c-d5ff46753243 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.043306629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.070092519Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.070937009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.090913808Z" level=info msg="Created container c7f77e12165ebe38cbceae295cb465cfa8a6c106a6d30c3c8f42ceb144d7d207: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w/dashboard-metrics-scraper" id=8589ac92-9fc8-43f9-817c-d5ff46753243 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.09193781Z" level=info msg="Starting container: c7f77e12165ebe38cbceae295cb465cfa8a6c106a6d30c3c8f42ceb144d7d207" id=a8c3a0c8-3fe8-41a2-8dfa-ba813fe056fb name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.096142607Z" level=info msg="Started container" PID=1736 containerID=c7f77e12165ebe38cbceae295cb465cfa8a6c106a6d30c3c8f42ceb144d7d207 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w/dashboard-metrics-scraper id=a8c3a0c8-3fe8-41a2-8dfa-ba813fe056fb name=/runtime.v1.RuntimeService/StartContainer sandboxID=cb5cb975766c3619048bf903573f8f07dfc3fc89af5b7cd7c245ec66fea6e5bb
	Nov 15 11:48:51 default-k8s-diff-port-769461 conmon[1734]: conmon c7f77e12165ebe38cbce <ninfo>: container 1736 exited with status 1
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.378149305Z" level=info msg="Removing container: 60d38c935882c33e051eef1710d98c8b072bc32a6e2aa23e4c358e09294beb12" id=60b40971-3795-4fba-8d91-c2dc11820360 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.398020585Z" level=info msg="Error loading conmon cgroup of container 60d38c935882c33e051eef1710d98c8b072bc32a6e2aa23e4c358e09294beb12: cgroup deleted" id=60b40971-3795-4fba-8d91-c2dc11820360 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.406159225Z" level=info msg="Removed container 60d38c935882c33e051eef1710d98c8b072bc32a6e2aa23e4c358e09294beb12: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w/dashboard-metrics-scraper" id=60b40971-3795-4fba-8d91-c2dc11820360 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c7f77e12165eb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago        Exited              dashboard-metrics-scraper   3                   cb5cb975766c3       dashboard-metrics-scraper-6ffb444bf9-bll9w             kubernetes-dashboard
	a6a07662328b4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           29 seconds ago       Running             storage-provisioner         2                   b92798ed3ce20       storage-provisioner                                    kube-system
	ba6623797a0ea       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   51 seconds ago       Running             kubernetes-dashboard        0                   c5880ae2cc4cf       kubernetes-dashboard-855c9754f9-dt85h                  kubernetes-dashboard
	1cfdbec99bdb1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   56d841e610312       coredns-66bc5c9577-xpkjw                               kube-system
	22ba9a62641b0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   f787dfc837d13       busybox                                                default
	096615ff4762f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   bc2a026debb0b       kube-proxy-j8s2w                                       kube-system
	339704fd3e18f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   908d3fa128f5c       kindnet-kzp2q                                          kube-system
	71751d3ff5736       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   b92798ed3ce20       storage-provisioner                                    kube-system
	58a8cafbd6582       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   7d2c1d7aa44de       kube-apiserver-default-k8s-diff-port-769461            kube-system
	c28f3e68692e8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   9a5d02076fff6       kube-controller-manager-default-k8s-diff-port-769461   kube-system
	1222b8dec2b50       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   6a5d2814a66f4       kube-scheduler-default-k8s-diff-port-769461            kube-system
	faf86f2f21163       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   4b7cb4c3ee2c1       etcd-default-k8s-diff-port-769461                      kube-system
	
	
	==> coredns [1cfdbec99bdb1d48554aa742e63c6b88cb1485331ece237fddfb8403fadc953f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43488 - 38637 "HINFO IN 2909241615240097569.8602145173257113971. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014502773s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-769461
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-769461
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=default-k8s-diff-port-769461
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_46_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:46:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-769461
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:48:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:48:28 +0000   Sat, 15 Nov 2025 11:46:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:48:28 +0000   Sat, 15 Nov 2025 11:46:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:48:28 +0000   Sat, 15 Nov 2025 11:46:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:48:28 +0000   Sat, 15 Nov 2025 11:47:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-769461
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                2d12c0bf-fabd-4e79-9141-b51555b040a7
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-xpkjw                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-769461                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-kzp2q                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-769461             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-769461    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-j8s2w                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-769461             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bll9w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dt85h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 59s                    kube-proxy       
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s                  node-controller  Node default-k8s-diff-port-769461 event: Registered Node default-k8s-diff-port-769461 in Controller
	  Normal   NodeReady                101s                   kubelet          Node default-k8s-diff-port-769461 status is now: NodeReady
	  Normal   Starting                 68s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node default-k8s-diff-port-769461 event: Registered Node default-k8s-diff-port-769461 in Controller
	
	
	==> dmesg <==
	[Nov15 11:25] overlayfs: idmapped layers are currently not supported
	[Nov15 11:26] overlayfs: idmapped layers are currently not supported
	[Nov15 11:28] overlayfs: idmapped layers are currently not supported
	[ +23.116625] overlayfs: idmapped layers are currently not supported
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	[Nov15 11:44] overlayfs: idmapped layers are currently not supported
	[Nov15 11:46] overlayfs: idmapped layers are currently not supported
	[Nov15 11:47] overlayfs: idmapped layers are currently not supported
	[ +42.475391] overlayfs: idmapped layers are currently not supported
	[Nov15 11:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [faf86f2f211634e1d17c6370364e838bc04fe0108542f93851f68044cecfe2f9] <==
	{"level":"warn","ts":"2025-11-15T11:47:56.421563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.436545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.453455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.468795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.486956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.502537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.522535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.545904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.558996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.581050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.595126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.610206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.626212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.641163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.656661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.675608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.694676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.708523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.723954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.738500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.753962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.788652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.802638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.817574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.883641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52780","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:48:59 up  3:31,  0 user,  load average: 3.23, 3.20, 2.82
	Linux default-k8s-diff-port-769461 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [339704fd3e18f7555facc0bf0fdf7754a2f2d41f8760e86bd1a5494e1c73869d] <==
	I1115 11:47:58.697031       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:47:58.697464       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 11:47:58.698216       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:47:58.698244       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:47:58.698292       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:47:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:47:58.904159       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:47:58.904189       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:47:58.904206       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:47:58.904330       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 11:48:28.903162       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 11:48:28.903175       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 11:48:28.904433       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 11:48:28.904435       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 11:48:30.205298       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:48:30.205331       1 metrics.go:72] Registering metrics
	I1115 11:48:30.205398       1 controller.go:711] "Syncing nftables rules"
	I1115 11:48:38.905012       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:48:38.905221       1 main.go:301] handling current node
	I1115 11:48:48.909101       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:48:48.909201       1 main.go:301] handling current node
	I1115 11:48:58.907984       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:48:58.908016       1 main.go:301] handling current node
	
	
	==> kube-apiserver [58a8cafbd658243739209adc98b5cca4fb51708fc98f57d93b11c6d97859707b] <==
	I1115 11:47:57.922470       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 11:47:57.922521       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 11:47:57.926568       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 11:47:57.926590       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 11:47:57.926691       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 11:47:57.938294       1 aggregator.go:171] initial CRD sync complete...
	I1115 11:47:57.938317       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 11:47:57.938324       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 11:47:57.938339       1 cache.go:39] Caches are synced for autoregister controller
	I1115 11:47:57.939485       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:47:57.944110       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 11:47:57.963850       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1115 11:47:57.969333       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 11:47:57.996139       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 11:47:58.098207       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 11:47:58.429118       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:47:58.850086       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 11:47:58.948722       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 11:47:59.026551       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:47:59.076845       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:47:59.354101       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.75.102"}
	I1115 11:47:59.378498       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.31.235"}
	I1115 11:48:01.369567       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 11:48:01.474465       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 11:48:01.568653       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c28f3e68692e829f48e01931512e3679a6223533e56ed8f074c9d056fafd4609] <==
	I1115 11:48:01.089673       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:48:01.093673       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 11:48:01.098416       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 11:48:01.113231       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 11:48:01.115587       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:48:01.117000       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 11:48:01.117138       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 11:48:01.117051       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 11:48:01.117856       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 11:48:01.119359       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:48:01.119510       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 11:48:01.120058       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 11:48:01.120135       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 11:48:01.120204       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 11:48:01.122084       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 11:48:01.122246       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 11:48:01.125696       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 11:48:01.130028       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 11:48:01.131759       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 11:48:01.136135       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 11:48:01.162156       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 11:48:01.163394       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 11:48:01.163732       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 11:48:01.163769       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 11:48:01.163832       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-proxy [096615ff4762fd1030ea22975fbda2deeafa29564f3d4a4bc42cb7213d7bca2e] <==
	I1115 11:47:58.798189       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:47:58.968496       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:47:59.070364       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:47:59.072990       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 11:47:59.073080       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:47:59.192925       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:47:59.193052       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:47:59.198471       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:47:59.199840       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:47:59.199917       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:47:59.205798       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:47:59.205869       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:47:59.206198       1 config.go:200] "Starting service config controller"
	I1115 11:47:59.206241       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:47:59.206722       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:47:59.206765       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:47:59.207209       1 config.go:309] "Starting node config controller"
	I1115 11:47:59.207216       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:47:59.207222       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:47:59.312026       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:47:59.312090       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 11:47:59.312152       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1222b8dec2b50ece8a4af1cb27e223b6a0079f14fc1c5ecf88240ddba9fe0ee0] <==
	I1115 11:47:57.782216       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:47:57.796003       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 11:47:57.796094       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:47:57.796114       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:47:57.796141       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1115 11:47:57.827143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 11:47:57.829227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 11:47:57.829315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 11:47:57.829382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 11:47:57.829473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 11:47:57.833411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 11:47:57.833523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 11:47:57.833596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 11:47:57.833709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 11:47:57.833767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:47:57.833825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 11:47:57.833900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:47:57.833957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 11:47:57.834042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 11:47:57.834163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 11:47:57.834331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 11:47:57.834375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 11:47:57.852496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 11:47:57.852669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1115 11:47:59.299173       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 11:48:01 default-k8s-diff-port-769461 kubelet[780]: W1115 11:48:01.992414     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/crio-c5880ae2cc4cf84ca139cac35ab22d04d170e8b9609c17b272cb5186c2e96aa3 WatchSource:0}: Error finding container c5880ae2cc4cf84ca139cac35ab22d04d170e8b9609c17b272cb5186c2e96aa3: Status 404 returned error can't find the container with id c5880ae2cc4cf84ca139cac35ab22d04d170e8b9609c17b272cb5186c2e96aa3
	Nov 15 11:48:02 default-k8s-diff-port-769461 kubelet[780]: W1115 11:48:02.021379     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/crio-cb5cb975766c3619048bf903573f8f07dfc3fc89af5b7cd7c245ec66fea6e5bb WatchSource:0}: Error finding container cb5cb975766c3619048bf903573f8f07dfc3fc89af5b7cd7c245ec66fea6e5bb: Status 404 returned error can't find the container with id cb5cb975766c3619048bf903573f8f07dfc3fc89af5b7cd7c245ec66fea6e5bb
	Nov 15 11:48:07 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:07.309674     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dt85h" podStartSLOduration=1.440605714 podStartE2EDuration="6.309656798s" podCreationTimestamp="2025-11-15 11:48:01 +0000 UTC" firstStartedPulling="2025-11-15 11:48:01.996138232 +0000 UTC m=+10.112635242" lastFinishedPulling="2025-11-15 11:48:06.865189315 +0000 UTC m=+14.981686326" observedRunningTime="2025-11-15 11:48:07.3086565 +0000 UTC m=+15.425153519" watchObservedRunningTime="2025-11-15 11:48:07.309656798 +0000 UTC m=+15.426153809"
	Nov 15 11:48:12 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:12.259641     780 scope.go:117] "RemoveContainer" containerID="a724efbf495e16d52766b4b6cace9d9a566ec8dc057d3e7576be260ed7bd62db"
	Nov 15 11:48:13 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:13.264037     780 scope.go:117] "RemoveContainer" containerID="a724efbf495e16d52766b4b6cace9d9a566ec8dc057d3e7576be260ed7bd62db"
	Nov 15 11:48:13 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:13.264343     780 scope.go:117] "RemoveContainer" containerID="ed827e2b985d4cb407886fec0fcb7c0dd1897397f3ea13db38942360d4f92a50"
	Nov 15 11:48:13 default-k8s-diff-port-769461 kubelet[780]: E1115 11:48:13.264490     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bll9w_kubernetes-dashboard(872b83f0-6c19-4852-b060-34e579413e97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w" podUID="872b83f0-6c19-4852-b060-34e579413e97"
	Nov 15 11:48:14 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:14.268478     780 scope.go:117] "RemoveContainer" containerID="ed827e2b985d4cb407886fec0fcb7c0dd1897397f3ea13db38942360d4f92a50"
	Nov 15 11:48:14 default-k8s-diff-port-769461 kubelet[780]: E1115 11:48:14.268637     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bll9w_kubernetes-dashboard(872b83f0-6c19-4852-b060-34e579413e97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w" podUID="872b83f0-6c19-4852-b060-34e579413e97"
	Nov 15 11:48:15 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:15.993545     780 scope.go:117] "RemoveContainer" containerID="ed827e2b985d4cb407886fec0fcb7c0dd1897397f3ea13db38942360d4f92a50"
	Nov 15 11:48:15 default-k8s-diff-port-769461 kubelet[780]: E1115 11:48:15.993736     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bll9w_kubernetes-dashboard(872b83f0-6c19-4852-b060-34e579413e97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w" podUID="872b83f0-6c19-4852-b060-34e579413e97"
	Nov 15 11:48:27 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:27.038799     780 scope.go:117] "RemoveContainer" containerID="ed827e2b985d4cb407886fec0fcb7c0dd1897397f3ea13db38942360d4f92a50"
	Nov 15 11:48:27 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:27.299671     780 scope.go:117] "RemoveContainer" containerID="ed827e2b985d4cb407886fec0fcb7c0dd1897397f3ea13db38942360d4f92a50"
	Nov 15 11:48:27 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:27.300007     780 scope.go:117] "RemoveContainer" containerID="60d38c935882c33e051eef1710d98c8b072bc32a6e2aa23e4c358e09294beb12"
	Nov 15 11:48:27 default-k8s-diff-port-769461 kubelet[780]: E1115 11:48:27.300157     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bll9w_kubernetes-dashboard(872b83f0-6c19-4852-b060-34e579413e97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w" podUID="872b83f0-6c19-4852-b060-34e579413e97"
	Nov 15 11:48:29 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:29.307907     780 scope.go:117] "RemoveContainer" containerID="71751d3ff5736d4c1ddda1fdd64370dbafe7788b817bada80da23c901e7a380a"
	Nov 15 11:48:35 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:35.993393     780 scope.go:117] "RemoveContainer" containerID="60d38c935882c33e051eef1710d98c8b072bc32a6e2aa23e4c358e09294beb12"
	Nov 15 11:48:35 default-k8s-diff-port-769461 kubelet[780]: E1115 11:48:35.993999     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bll9w_kubernetes-dashboard(872b83f0-6c19-4852-b060-34e579413e97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w" podUID="872b83f0-6c19-4852-b060-34e579413e97"
	Nov 15 11:48:51 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:51.039493     780 scope.go:117] "RemoveContainer" containerID="60d38c935882c33e051eef1710d98c8b072bc32a6e2aa23e4c358e09294beb12"
	Nov 15 11:48:51 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:51.364336     780 scope.go:117] "RemoveContainer" containerID="60d38c935882c33e051eef1710d98c8b072bc32a6e2aa23e4c358e09294beb12"
	Nov 15 11:48:51 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:51.364692     780 scope.go:117] "RemoveContainer" containerID="c7f77e12165ebe38cbceae295cb465cfa8a6c106a6d30c3c8f42ceb144d7d207"
	Nov 15 11:48:51 default-k8s-diff-port-769461 kubelet[780]: E1115 11:48:51.364904     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bll9w_kubernetes-dashboard(872b83f0-6c19-4852-b060-34e579413e97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w" podUID="872b83f0-6c19-4852-b060-34e579413e97"
	Nov 15 11:48:55 default-k8s-diff-port-769461 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 11:48:55 default-k8s-diff-port-769461 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 11:48:55 default-k8s-diff-port-769461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ba6623797a0ea7a24e6b56ca1b002092d0fb220ca803cf4677a6087af7eee357] <==
	2025/11/15 11:48:06 Using namespace: kubernetes-dashboard
	2025/11/15 11:48:06 Using in-cluster config to connect to apiserver
	2025/11/15 11:48:06 Using secret token for csrf signing
	2025/11/15 11:48:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 11:48:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 11:48:06 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 11:48:06 Generating JWE encryption key
	2025/11/15 11:48:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 11:48:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 11:48:08 Initializing JWE encryption key from synchronized object
	2025/11/15 11:48:08 Creating in-cluster Sidecar client
	2025/11/15 11:48:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 11:48:08 Serving insecurely on HTTP port: 9090
	2025/11/15 11:48:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 11:48:06 Starting overwatch
	
	
	==> storage-provisioner [71751d3ff5736d4c1ddda1fdd64370dbafe7788b817bada80da23c901e7a380a] <==
	I1115 11:47:58.649880       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 11:48:28.652247       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a6a07662328b4265eb840a2cc587982ae3774637d07cd67bc54699170e319aab] <==
	W1115 11:48:29.369519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:32.824103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:37.085231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:40.683957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:43.738210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:46.761089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:46.766211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:48:46.766380       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 11:48:46.766538       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-769461_e629501c-487c-4d10-9b1f-49b11fb3658d!
	I1115 11:48:46.767413       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c930a73f-6b14-48e2-977d-fde466625e84", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-769461_e629501c-487c-4d10-9b1f-49b11fb3658d became leader
	W1115 11:48:46.771109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:46.789123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:48:46.868924       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-769461_e629501c-487c-4d10-9b1f-49b11fb3658d!
	W1115 11:48:48.791788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:48.798422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:50.804542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:50.809999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:52.814278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:52.819993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:54.828080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:54.840271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:56.843339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:56.848613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:58.852649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:58.868894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-769461 -n default-k8s-diff-port-769461
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-769461 -n default-k8s-diff-port-769461: exit status 2 (631.480485ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-769461 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
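Note: the pause step for this profile has no recorded end time in the audit log further below, so one way to investigate is to re-run the same command by hand with verbose logging (illustrative, copied from that audit entry):

	out/minikube-linux-arm64 pause -p default-k8s-diff-port-769461 --alsologtostderr -v=1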
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-769461
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-769461:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054",
	        "Created": "2025-11-15T11:46:05.665660971Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 781446,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:47:44.765651606Z",
	            "FinishedAt": "2025-11-15T11:47:43.91167659Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/hostname",
	        "HostsPath": "/var/lib/docker/containers/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/hosts",
	        "LogPath": "/var/lib/docker/containers/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054-json.log",
	        "Name": "/default-k8s-diff-port-769461",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-769461:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-769461",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054",
	                "LowerDir": "/var/lib/docker/overlay2/b4652a04669bd6a09fb7076cb3aa2068a43fcd682c401faf158afa049b1e75b7-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b4652a04669bd6a09fb7076cb3aa2068a43fcd682c401faf158afa049b1e75b7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b4652a04669bd6a09fb7076cb3aa2068a43fcd682c401faf158afa049b1e75b7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b4652a04669bd6a09fb7076cb3aa2068a43fcd682c401faf158afa049b1e75b7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-769461",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-769461/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-769461",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-769461",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-769461",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0133bc8e0970008bcf663efd77e35b231bdde0d40bdc8ff2779ca6d99568e6ed",
	            "SandboxKey": "/var/run/docker/netns/0133bc8e0970",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33809"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33810"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33811"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33812"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-769461": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:43:63:8b:85:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "97f28ee3e21c22cb67f771931d4a0c5ff8297079a2da7de0d16d0518cb24266f",
	                    "EndpointID": "19fb2afd474ed42ee7c2a9482a51d9e9ed1d9a222adb5b13668c258a7a92d17d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-769461",
	                        "6bc3c2610e90"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
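Note: for this kind of post-mortem the NetworkSettings.Ports block is usually the interesting part of the inspect output; the same Go template the harness uses elsewhere for the SSH port can be pointed at any mapping, e.g. the 8444 API-server port of this profile (illustrative):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-769461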
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-769461 -n default-k8s-diff-port-769461
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-769461 -n default-k8s-diff-port-769461: exit status 2 (404.245563ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
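Note: minikube encodes component state in the exit code of status, so a non-zero exit right after a pause attempt can occur even while the host container reports Running; querying several fields at once makes the state easier to read (illustrative; field names follow the same --format template fields used above):

	out/minikube-linux-arm64 status -p default-k8s-diff-port-769461 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'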
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-769461 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-769461 logs -n 25: (1.363107843s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-303284 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-303284          │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ delete  │ -p cert-options-303284                                                                                                                                                                                                                        │ cert-options-303284          │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:43 UTC │
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:43 UTC │ 15 Nov 25 11:44 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-872969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │                     │
	│ stop    │ -p old-k8s-version-872969 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-872969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:44 UTC │
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:45 UTC │
	│ image   │ old-k8s-version-872969 image list --format=json                                                                                                                                                                                               │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ pause   │ -p old-k8s-version-872969 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │                     │
	│ delete  │ -p old-k8s-version-872969                                                                                                                                                                                                                     │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ delete  │ -p old-k8s-version-872969                                                                                                                                                                                                                     │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ start   │ -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:47 UTC │
	│ start   │ -p cert-expiration-636406 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ delete  │ -p cert-expiration-636406                                                                                                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-769461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-769461 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:47 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-769461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:47 UTC │
	│ start   │ -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable metrics-server -p embed-certs-404149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ stop    │ -p embed-certs-404149 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable dashboard -p embed-certs-404149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ image   │ default-k8s-diff-port-769461 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ pause   │ -p default-k8s-diff-port-769461 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:48:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:48:31.334673  784287 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:48:31.334869  784287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:48:31.334899  784287 out.go:374] Setting ErrFile to fd 2...
	I1115 11:48:31.334926  784287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:48:31.335590  784287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:48:31.336044  784287 out.go:368] Setting JSON to false
	I1115 11:48:31.337158  784287 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12662,"bootTime":1763194649,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:48:31.337261  784287 start.go:143] virtualization:  
	I1115 11:48:31.342177  784287 out.go:179] * [embed-certs-404149] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:48:31.345186  784287 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:48:31.345322  784287 notify.go:221] Checking for updates...
	I1115 11:48:31.351046  784287 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:48:31.353910  784287 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:48:31.356707  784287 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:48:31.359463  784287 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:48:31.362347  784287 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:48:31.365972  784287 config.go:182] Loaded profile config "embed-certs-404149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:48:31.366611  784287 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:48:31.401420  784287 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:48:31.401616  784287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:48:31.465390  784287 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:48:31.455091536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:48:31.465505  784287 docker.go:319] overlay module found
	I1115 11:48:31.468901  784287 out.go:179] * Using the docker driver based on existing profile
	I1115 11:48:31.471896  784287 start.go:309] selected driver: docker
	I1115 11:48:31.471920  784287 start.go:930] validating driver "docker" against &{Name:embed-certs-404149 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:48:31.472021  784287 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:48:31.472782  784287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:48:31.551803  784287 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:48:31.542417248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:48:31.552155  784287 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:48:31.552189  784287 cni.go:84] Creating CNI manager for ""
	I1115 11:48:31.552248  784287 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:48:31.552294  784287 start.go:353] cluster config:
	{Name:embed-certs-404149 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:48:31.555438  784287 out.go:179] * Starting "embed-certs-404149" primary control-plane node in "embed-certs-404149" cluster
	I1115 11:48:31.558267  784287 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:48:31.561151  784287 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:48:31.564118  784287 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:48:31.564107  784287 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:48:31.564165  784287 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 11:48:31.564175  784287 cache.go:65] Caching tarball of preloaded images
	I1115 11:48:31.564281  784287 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:48:31.564290  784287 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:48:31.564421  784287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/config.json ...
	I1115 11:48:31.584043  784287 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:48:31.584067  784287 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:48:31.584081  784287 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:48:31.584106  784287 start.go:360] acquireMachinesLock for embed-certs-404149: {Name:mka215e00af293eebe84cec598dbc8661faf4dbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:48:31.584171  784287 start.go:364] duration metric: took 36.284µs to acquireMachinesLock for "embed-certs-404149"
	I1115 11:48:31.584194  784287 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:48:31.584200  784287 fix.go:54] fixHost starting: 
	I1115 11:48:31.584452  784287 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:48:31.602115  784287 fix.go:112] recreateIfNeeded on embed-certs-404149: state=Stopped err=<nil>
	W1115 11:48:31.602148  784287 fix.go:138] unexpected machine state, will restart: <nil>
	W1115 11:48:29.934856  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	W1115 11:48:31.938522  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	W1115 11:48:34.435093  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	I1115 11:48:31.605367  784287 out.go:252] * Restarting existing docker container for "embed-certs-404149" ...
	I1115 11:48:31.605453  784287 cli_runner.go:164] Run: docker start embed-certs-404149
	I1115 11:48:31.859820  784287 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:48:31.882017  784287 kic.go:430] container "embed-certs-404149" state is running.
	I1115 11:48:31.882510  784287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-404149
	I1115 11:48:31.909007  784287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/config.json ...
	I1115 11:48:31.909346  784287 machine.go:94] provisionDockerMachine start ...
	I1115 11:48:31.909509  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:31.943317  784287 main.go:143] libmachine: Using SSH client type: native
	I1115 11:48:31.943640  784287 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33814 <nil> <nil>}
	I1115 11:48:31.943649  784287 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:48:31.944286  784287 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:48:35.108950  784287 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-404149
	
	I1115 11:48:35.108978  784287 ubuntu.go:182] provisioning hostname "embed-certs-404149"
	I1115 11:48:35.109049  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:35.127351  784287 main.go:143] libmachine: Using SSH client type: native
	I1115 11:48:35.127696  784287 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33814 <nil> <nil>}
	I1115 11:48:35.127713  784287 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-404149 && echo "embed-certs-404149" | sudo tee /etc/hostname
	I1115 11:48:35.292425  784287 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-404149
	
	I1115 11:48:35.292544  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:35.312761  784287 main.go:143] libmachine: Using SSH client type: native
	I1115 11:48:35.313119  784287 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33814 <nil> <nil>}
	I1115 11:48:35.313151  784287 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-404149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-404149/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-404149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:48:35.465096  784287 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:48:35.465124  784287 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:48:35.465155  784287 ubuntu.go:190] setting up certificates
	I1115 11:48:35.465165  784287 provision.go:84] configureAuth start
	I1115 11:48:35.465223  784287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-404149
	I1115 11:48:35.482113  784287 provision.go:143] copyHostCerts
	I1115 11:48:35.482182  784287 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:48:35.482202  784287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:48:35.482282  784287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:48:35.482378  784287 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:48:35.482389  784287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:48:35.482415  784287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:48:35.482472  784287 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:48:35.482481  784287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:48:35.482506  784287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:48:35.482563  784287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.embed-certs-404149 san=[127.0.0.1 192.168.76.2 embed-certs-404149 localhost minikube]
	I1115 11:48:35.569528  784287 provision.go:177] copyRemoteCerts
	I1115 11:48:35.569597  784287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:48:35.569643  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:35.589878  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:35.697861  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:48:35.719325  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:48:35.739274  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 11:48:35.757531  784287 provision.go:87] duration metric: took 292.349901ms to configureAuth
	I1115 11:48:35.757565  784287 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:48:35.757764  784287 config.go:182] Loaded profile config "embed-certs-404149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:48:35.757870  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:35.775038  784287 main.go:143] libmachine: Using SSH client type: native
	I1115 11:48:35.775344  784287 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33814 <nil> <nil>}
	I1115 11:48:35.775360  784287 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:48:36.123616  784287 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:48:36.123638  784287 machine.go:97] duration metric: took 4.214272552s to provisionDockerMachine
	I1115 11:48:36.123650  784287 start.go:293] postStartSetup for "embed-certs-404149" (driver="docker")
	I1115 11:48:36.123661  784287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:48:36.123722  784287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:48:36.123759  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:36.146248  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:36.253488  784287 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:48:36.256895  784287 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:48:36.256922  784287 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:48:36.256934  784287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:48:36.256995  784287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:48:36.257091  784287 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:48:36.257244  784287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:48:36.265208  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:48:36.283527  784287 start.go:296] duration metric: took 159.861565ms for postStartSetup
	I1115 11:48:36.283654  784287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:48:36.283748  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:36.301208  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:36.406516  784287 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:48:36.411550  784287 fix.go:56] duration metric: took 4.827342425s for fixHost
	I1115 11:48:36.411585  784287 start.go:83] releasing machines lock for "embed-certs-404149", held for 4.827401215s
	I1115 11:48:36.411651  784287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-404149
	I1115 11:48:36.429341  784287 ssh_runner.go:195] Run: cat /version.json
	I1115 11:48:36.429412  784287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:48:36.429474  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:36.429416  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:36.451457  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:36.455674  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:36.665809  784287 ssh_runner.go:195] Run: systemctl --version
	I1115 11:48:36.672597  784287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:48:36.710201  784287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:48:36.714721  784287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:48:36.714824  784287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:48:36.724601  784287 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:48:36.724627  784287 start.go:496] detecting cgroup driver to use...
	I1115 11:48:36.724680  784287 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:48:36.724736  784287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:48:36.740345  784287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:48:36.753898  784287 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:48:36.753989  784287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:48:36.771070  784287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:48:36.787158  784287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:48:36.918533  784287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:48:37.040468  784287 docker.go:234] disabling docker service ...
	I1115 11:48:37.040569  784287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:48:37.056160  784287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:48:37.069487  784287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:48:37.188055  784287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:48:37.300164  784287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:48:37.313658  784287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:48:37.328848  784287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:48:37.328948  784287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:48:37.338985  784287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:48:37.339054  784287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:48:37.347886  784287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:48:37.356799  784287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:48:37.366123  784287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:48:37.374481  784287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:48:37.383310  784287 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:48:37.391315  784287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:48:37.399966  784287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:48:37.409405  784287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:48:37.416729  784287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:48:37.542347  784287 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:48:37.682070  784287 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:48:37.682144  784287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:48:37.686262  784287 start.go:564] Will wait 60s for crictl version
	I1115 11:48:37.686329  784287 ssh_runner.go:195] Run: which crictl
	I1115 11:48:37.689929  784287 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:48:37.714851  784287 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:48:37.714935  784287 ssh_runner.go:195] Run: crio --version
	I1115 11:48:37.744556  784287 ssh_runner.go:195] Run: crio --version
	I1115 11:48:37.780282  784287 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:48:37.783221  784287 cli_runner.go:164] Run: docker network inspect embed-certs-404149 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:48:37.799913  784287 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 11:48:37.804136  784287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:48:37.815015  784287 kubeadm.go:884] updating cluster {Name:embed-certs-404149 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:48:37.815139  784287 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:48:37.815198  784287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:48:37.848473  784287 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:48:37.848500  784287 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:48:37.848557  784287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:48:37.879776  784287 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:48:37.879799  784287 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:48:37.879807  784287 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 11:48:37.879912  784287 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-404149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:48:37.879987  784287 ssh_runner.go:195] Run: crio config
	I1115 11:48:37.967367  784287 cni.go:84] Creating CNI manager for ""
	I1115 11:48:37.967430  784287 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:48:37.967466  784287 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:48:37.967504  784287 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-404149 NodeName:embed-certs-404149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:48:37.967662  784287 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-404149"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
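The rendered kubeadm/kubelet/kube-proxy config above is staged as /var/tmp/minikube/kubeadm.yaml.new and, on a restart like this one, diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguration (see the `sudo diff -u` step further down). A minimal sketch of running that check by hand; the kubeadm binary path is assumed from the binaries directory listed in the next log line, and `kubeadm config validate` is assumed to be available in this kubeadm release:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new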
	
	I1115 11:48:37.967758  784287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:48:37.978086  784287 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:48:37.978210  784287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:48:37.986101  784287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 11:48:38.002334  784287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:48:38.021586  784287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1115 11:48:38.039446  784287 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:48:38.045007  784287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:48:38.057493  784287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:48:38.178626  784287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:48:38.194187  784287 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149 for IP: 192.168.76.2
	I1115 11:48:38.194209  784287 certs.go:195] generating shared ca certs ...
	I1115 11:48:38.194226  784287 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:48:38.194368  784287 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:48:38.194432  784287 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:48:38.194446  784287 certs.go:257] generating profile certs ...
	I1115 11:48:38.194541  784287 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/client.key
	I1115 11:48:38.194611  784287 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.key.feb77388
	I1115 11:48:38.194654  784287 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/proxy-client.key
	I1115 11:48:38.194766  784287 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:48:38.194799  784287 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:48:38.194812  784287 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:48:38.194841  784287 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:48:38.194866  784287 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:48:38.194891  784287 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:48:38.194934  784287 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:48:38.195589  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:48:38.218514  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:48:38.238898  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:48:38.259623  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:48:38.280826  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 11:48:38.302053  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:48:38.322361  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:48:38.352143  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/embed-certs-404149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 11:48:38.376887  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:48:38.398465  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:48:38.426379  784287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:48:38.448490  784287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:48:38.464129  784287 ssh_runner.go:195] Run: openssl version
	I1115 11:48:38.472126  784287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:48:38.481577  784287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:48:38.486689  784287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:48:38.486754  784287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:48:38.533900  784287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:48:38.541960  784287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:48:38.550370  784287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:48:38.555052  784287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:48:38.555124  784287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:48:38.596315  784287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:48:38.604179  784287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:48:38.612361  784287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:48:38.616111  784287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:48:38.616173  784287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:48:38.657496  784287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:48:38.665619  784287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:48:38.670071  784287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:48:38.716095  784287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:48:38.766741  784287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:48:38.812823  784287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:48:38.856654  784287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:48:38.918678  784287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
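Each `openssl x509 -checkend 86400` above exits 0 only if the certificate remains valid for at least the next 24 hours, which is how the restart path decides whether the existing certs can be reused. A minimal sketch of checking one of them manually (path taken from the scp steps above):

	sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
	  && echo "valid for at least 24h" || echo "expires within 24h"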
	I1115 11:48:39.061363  784287 kubeadm.go:401] StartCluster: {Name:embed-certs-404149 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-404149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:48:39.061470  784287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:48:39.061584  784287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:48:39.132441  784287 cri.go:89] found id: ""
	I1115 11:48:39.132555  784287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:48:39.148317  784287 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:48:39.148348  784287 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:48:39.148446  784287 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:48:39.160273  784287 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:48:39.161021  784287 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-404149" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:48:39.161362  784287 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-584713/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-404149" cluster setting kubeconfig missing "embed-certs-404149" context setting]
	I1115 11:48:39.161942  784287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:48:39.164113  784287 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:48:39.188651  784287 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1115 11:48:39.188697  784287 kubeadm.go:602] duration metric: took 40.323417ms to restartPrimaryControlPlane
	I1115 11:48:39.188713  784287 kubeadm.go:403] duration metric: took 127.365595ms to StartCluster
	I1115 11:48:39.188729  784287 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:48:39.188814  784287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:48:39.190736  784287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:48:39.190999  784287 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:48:39.191403  784287 config.go:182] Loaded profile config "embed-certs-404149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:48:39.191399  784287 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:48:39.191492  784287 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-404149"
	I1115 11:48:39.191505  784287 addons.go:70] Setting dashboard=true in profile "embed-certs-404149"
	I1115 11:48:39.191514  784287 addons.go:70] Setting default-storageclass=true in profile "embed-certs-404149"
	I1115 11:48:39.191519  784287 addons.go:239] Setting addon dashboard=true in "embed-certs-404149"
	W1115 11:48:39.191525  784287 addons.go:248] addon dashboard should already be in state true
	I1115 11:48:39.191531  784287 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-404149"
	I1115 11:48:39.191565  784287 host.go:66] Checking if "embed-certs-404149" exists ...
	I1115 11:48:39.191837  784287 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:48:39.192117  784287 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:48:39.191507  784287 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-404149"
	W1115 11:48:39.192819  784287 addons.go:248] addon storage-provisioner should already be in state true
	I1115 11:48:39.192949  784287 host.go:66] Checking if "embed-certs-404149" exists ...
	I1115 11:48:39.193562  784287 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:48:39.196869  784287 out.go:179] * Verifying Kubernetes components...
	I1115 11:48:39.202520  784287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:48:39.251452  784287 addons.go:239] Setting addon default-storageclass=true in "embed-certs-404149"
	W1115 11:48:39.251476  784287 addons.go:248] addon default-storageclass should already be in state true
	I1115 11:48:39.251501  784287 host.go:66] Checking if "embed-certs-404149" exists ...
	I1115 11:48:39.251952  784287 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:48:39.266881  784287 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:48:39.267000  784287 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 11:48:39.271262  784287 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:48:39.271285  784287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:48:39.271351  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:39.281039  784287 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1115 11:48:36.439678  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	W1115 11:48:38.963734  781316 pod_ready.go:104] pod "coredns-66bc5c9577-xpkjw" is not "Ready", error: <nil>
	I1115 11:48:39.284262  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 11:48:39.284288  784287 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 11:48:39.284356  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:39.306461  784287 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:48:39.306483  784287 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:48:39.306561  784287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:48:39.330290  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:39.340978  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:39.364195  784287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:48:39.643527  784287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:48:39.686383  784287 node_ready.go:35] waiting up to 6m0s for node "embed-certs-404149" to be "Ready" ...
	I1115 11:48:39.691352  784287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:48:39.699530  784287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:48:39.713996  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 11:48:39.714098  784287 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 11:48:39.776756  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 11:48:39.776848  784287 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 11:48:39.815260  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 11:48:39.815362  784287 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 11:48:39.942365  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 11:48:39.942448  784287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 11:48:39.999810  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 11:48:39.999902  784287 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 11:48:40.050041  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 11:48:40.050074  784287 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 11:48:40.069473  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 11:48:40.069496  784287 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 11:48:40.092670  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 11:48:40.092692  784287 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 11:48:40.120783  784287 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:48:40.120849  784287 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 11:48:40.150982  784287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:48:40.434798  781316 pod_ready.go:94] pod "coredns-66bc5c9577-xpkjw" is "Ready"
	I1115 11:48:40.434874  781316 pod_ready.go:86] duration metric: took 40.505224356s for pod "coredns-66bc5c9577-xpkjw" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:40.440492  781316 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:40.446429  781316 pod_ready.go:94] pod "etcd-default-k8s-diff-port-769461" is "Ready"
	I1115 11:48:40.446503  781316 pod_ready.go:86] duration metric: took 5.989473ms for pod "etcd-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:40.449058  781316 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:40.457634  781316 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-769461" is "Ready"
	I1115 11:48:40.457655  781316 pod_ready.go:86] duration metric: took 8.530833ms for pod "kube-apiserver-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:40.460301  781316 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:40.632738  781316 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-769461" is "Ready"
	I1115 11:48:40.632764  781316 pod_ready.go:86] duration metric: took 172.44373ms for pod "kube-controller-manager-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:40.832649  781316 pod_ready.go:83] waiting for pod "kube-proxy-j8s2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:41.233694  781316 pod_ready.go:94] pod "kube-proxy-j8s2w" is "Ready"
	I1115 11:48:41.233772  781316 pod_ready.go:86] duration metric: took 401.047865ms for pod "kube-proxy-j8s2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:41.432962  781316 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:41.833241  781316 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-769461" is "Ready"
	I1115 11:48:41.833318  781316 pod_ready.go:86] duration metric: took 400.275384ms for pod "kube-scheduler-default-k8s-diff-port-769461" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:48:41.833346  781316 pod_ready.go:40] duration metric: took 41.9087395s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:48:41.926602  781316 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:48:41.929858  781316 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-769461" cluster and "default" namespace by default
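The pod_ready loop above polls until the labelled kube-system pods report Ready (about 40s here for coredns). A roughly equivalent manual check with kubectl, assuming the context name matches the profile written to the kubeconfig above:

	kubectl --context default-k8s-diff-port-769461 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	kubectl --context default-k8s-diff-port-769461 -n kube-system get pods -o wide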
	I1115 11:48:44.812053  784287 node_ready.go:49] node "embed-certs-404149" is "Ready"
	I1115 11:48:44.812080  784287 node_ready.go:38] duration metric: took 5.12559031s for node "embed-certs-404149" to be "Ready" ...
	I1115 11:48:44.812094  784287 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:48:44.812150  784287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:48:46.593919  784287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.90247671s)
	I1115 11:48:46.593989  784287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.894355488s)
	I1115 11:48:46.650644  784287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.499612838s)
	I1115 11:48:46.650925  784287 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.838743949s)
	I1115 11:48:46.650962  784287 api_server.go:72] duration metric: took 7.459932267s to wait for apiserver process to appear ...
	I1115 11:48:46.650982  784287 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:48:46.651015  784287 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:48:46.653827  784287 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-404149 addons enable metrics-server
	
	I1115 11:48:46.656772  784287 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1115 11:48:46.659577  784287 addons.go:515] duration metric: took 7.468173964s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1115 11:48:46.664440  784287 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 11:48:46.664461  784287 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 11:48:47.152060  784287 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:48:47.170073  784287 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 11:48:47.173783  784287 api_server.go:141] control plane version: v1.34.1
	I1115 11:48:47.173844  784287 api_server.go:131] duration metric: took 522.842244ms to wait for apiserver health ...
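The transient 500 above comes from a single failed post-start hook (rbac/bootstrap-roles) while the apiserver finishes bootstrapping; half a second later the same endpoint returns 200. A minimal sketch of querying the same per-check breakdown once the cluster is reachable, assuming the embed-certs-404149 context from the kubeconfig update above:

	kubectl --context embed-certs-404149 get --raw='/healthz?verbose'
	kubectl --context embed-certs-404149 get --raw='/readyz?verbose'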
	I1115 11:48:47.173866  784287 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:48:47.188636  784287 system_pods.go:59] 8 kube-system pods found
	I1115 11:48:47.188722  784287 system_pods.go:61] "coredns-66bc5c9577-2l449" [5e943487-c90a-4a5d-8954-6d44870ececc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:48:47.188746  784287 system_pods.go:61] "etcd-embed-certs-404149" [061e2652-8536-4564-bd3c-aa1d961acc3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:48:47.188785  784287 system_pods.go:61] "kindnet-qsvh7" [65b3cd6e-66ac-4934-91d3-16fdc27af287] Running
	I1115 11:48:47.188810  784287 system_pods.go:61] "kube-apiserver-embed-certs-404149" [df336c4d-f7c7-4ec6-98d8-dc1aef88cea7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:48:47.188829  784287 system_pods.go:61] "kube-controller-manager-embed-certs-404149" [cb5308c4-97af-4752-9cd2-856eb8d915fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:48:47.188849  784287 system_pods.go:61] "kube-proxy-5d2lb" [be30c5c3-f080-4721-b6d8-2f18f7736abe] Running
	I1115 11:48:47.188908  784287 system_pods.go:61] "kube-scheduler-embed-certs-404149" [808c1b05-090a-4dd9-9c5b-53960a09c527] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:48:47.188932  784287 system_pods.go:61] "storage-provisioner" [7b6e6bb5-e4cf-486d-bfc3-d07a3848e221] Running
	I1115 11:48:47.188953  784287 system_pods.go:74] duration metric: took 15.069202ms to wait for pod list to return data ...
	I1115 11:48:47.188973  784287 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:48:47.193321  784287 default_sa.go:45] found service account: "default"
	I1115 11:48:47.193377  784287 default_sa.go:55] duration metric: took 4.383998ms for default service account to be created ...
	I1115 11:48:47.193401  784287 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:48:47.203395  784287 system_pods.go:86] 8 kube-system pods found
	I1115 11:48:47.203473  784287 system_pods.go:89] "coredns-66bc5c9577-2l449" [5e943487-c90a-4a5d-8954-6d44870ececc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:48:47.203498  784287 system_pods.go:89] "etcd-embed-certs-404149" [061e2652-8536-4564-bd3c-aa1d961acc3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:48:47.203535  784287 system_pods.go:89] "kindnet-qsvh7" [65b3cd6e-66ac-4934-91d3-16fdc27af287] Running
	I1115 11:48:47.203560  784287 system_pods.go:89] "kube-apiserver-embed-certs-404149" [df336c4d-f7c7-4ec6-98d8-dc1aef88cea7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:48:47.203589  784287 system_pods.go:89] "kube-controller-manager-embed-certs-404149" [cb5308c4-97af-4752-9cd2-856eb8d915fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:48:47.203607  784287 system_pods.go:89] "kube-proxy-5d2lb" [be30c5c3-f080-4721-b6d8-2f18f7736abe] Running
	I1115 11:48:47.203641  784287 system_pods.go:89] "kube-scheduler-embed-certs-404149" [808c1b05-090a-4dd9-9c5b-53960a09c527] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:48:47.203665  784287 system_pods.go:89] "storage-provisioner" [7b6e6bb5-e4cf-486d-bfc3-d07a3848e221] Running
	I1115 11:48:47.203715  784287 system_pods.go:126] duration metric: took 10.295283ms to wait for k8s-apps to be running ...
	I1115 11:48:47.203746  784287 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:48:47.203824  784287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:48:47.220294  784287 system_svc.go:56] duration metric: took 16.539749ms WaitForService to wait for kubelet
	I1115 11:48:47.220363  784287 kubeadm.go:587] duration metric: took 8.029331364s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:48:47.220398  784287 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:48:47.224107  784287 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:48:47.224181  784287 node_conditions.go:123] node cpu capacity is 2
	I1115 11:48:47.224207  784287 node_conditions.go:105] duration metric: took 3.787561ms to run NodePressure ...
	I1115 11:48:47.224230  784287 start.go:242] waiting for startup goroutines ...
	I1115 11:48:47.224264  784287 start.go:247] waiting for cluster config update ...
	I1115 11:48:47.224294  784287 start.go:256] writing updated cluster config ...
	I1115 11:48:47.224606  784287 ssh_runner.go:195] Run: rm -f paused
	I1115 11:48:47.230151  784287 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:48:47.234291  784287 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2l449" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 11:48:49.240913  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	W1115 11:48:51.242056  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	W1115 11:48:53.740285  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	W1115 11:48:55.741002  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.922833771Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.930441362Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.930597294Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.930669352Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.933846732Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.933992867Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.934065303Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.94351645Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.943681752Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.943765872Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.958225172Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:48:38 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:38.95837984Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.04056045Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a2591d98-e700-45b9-9dde-3957640dc151 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.042044593Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8becddf0-02be-439b-9c4a-784f330f81a7 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.043142369Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w/dashboard-metrics-scraper" id=8589ac92-9fc8-43f9-817c-d5ff46753243 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.043306629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.070092519Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.070937009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.090913808Z" level=info msg="Created container c7f77e12165ebe38cbceae295cb465cfa8a6c106a6d30c3c8f42ceb144d7d207: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w/dashboard-metrics-scraper" id=8589ac92-9fc8-43f9-817c-d5ff46753243 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.09193781Z" level=info msg="Starting container: c7f77e12165ebe38cbceae295cb465cfa8a6c106a6d30c3c8f42ceb144d7d207" id=a8c3a0c8-3fe8-41a2-8dfa-ba813fe056fb name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.096142607Z" level=info msg="Started container" PID=1736 containerID=c7f77e12165ebe38cbceae295cb465cfa8a6c106a6d30c3c8f42ceb144d7d207 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w/dashboard-metrics-scraper id=a8c3a0c8-3fe8-41a2-8dfa-ba813fe056fb name=/runtime.v1.RuntimeService/StartContainer sandboxID=cb5cb975766c3619048bf903573f8f07dfc3fc89af5b7cd7c245ec66fea6e5bb
	Nov 15 11:48:51 default-k8s-diff-port-769461 conmon[1734]: conmon c7f77e12165ebe38cbce <ninfo>: container 1736 exited with status 1
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.378149305Z" level=info msg="Removing container: 60d38c935882c33e051eef1710d98c8b072bc32a6e2aa23e4c358e09294beb12" id=60b40971-3795-4fba-8d91-c2dc11820360 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.398020585Z" level=info msg="Error loading conmon cgroup of container 60d38c935882c33e051eef1710d98c8b072bc32a6e2aa23e4c358e09294beb12: cgroup deleted" id=60b40971-3795-4fba-8d91-c2dc11820360 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:48:51 default-k8s-diff-port-769461 crio[654]: time="2025-11-15T11:48:51.406159225Z" level=info msg="Removed container 60d38c935882c33e051eef1710d98c8b072bc32a6e2aa23e4c358e09294beb12: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w/dashboard-metrics-scraper" id=60b40971-3795-4fba-8d91-c2dc11820360 name=/runtime.v1.RuntimeService/RemoveContainer
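The "==> CRI-O <==" excerpt above is the tail of the CRI-O journal on the default-k8s-diff-port node, as collected by `minikube logs`. A minimal sketch for pulling the same journal directly, assuming the profile name from this run:

	minikube -p default-k8s-diff-port-769461 ssh "sudo journalctl -u crio --no-pager | tail -n 40"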
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c7f77e12165eb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   3                   cb5cb975766c3       dashboard-metrics-scraper-6ffb444bf9-bll9w             kubernetes-dashboard
	a6a07662328b4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           32 seconds ago       Running             storage-provisioner         2                   b92798ed3ce20       storage-provisioner                                    kube-system
	ba6623797a0ea       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   54 seconds ago       Running             kubernetes-dashboard        0                   c5880ae2cc4cf       kubernetes-dashboard-855c9754f9-dt85h                  kubernetes-dashboard
	1cfdbec99bdb1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   56d841e610312       coredns-66bc5c9577-xpkjw                               kube-system
	22ba9a62641b0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   f787dfc837d13       busybox                                                default
	096615ff4762f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   bc2a026debb0b       kube-proxy-j8s2w                                       kube-system
	339704fd3e18f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   908d3fa128f5c       kindnet-kzp2q                                          kube-system
	71751d3ff5736       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   b92798ed3ce20       storage-provisioner                                    kube-system
	58a8cafbd6582       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   7d2c1d7aa44de       kube-apiserver-default-k8s-diff-port-769461            kube-system
	c28f3e68692e8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   9a5d02076fff6       kube-controller-manager-default-k8s-diff-port-769461   kube-system
	1222b8dec2b50       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   6a5d2814a66f4       kube-scheduler-default-k8s-diff-port-769461            kube-system
	faf86f2f21163       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   4b7cb4c3ee2c1       etcd-default-k8s-diff-port-769461                      kube-system
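The container table above mirrors CRI-O's view of the node; a minimal sketch of reproducing it directly, assuming the same profile:

	minikube -p default-k8s-diff-port-769461 ssh "sudo crictl ps -a"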
	
	
	==> coredns [1cfdbec99bdb1d48554aa742e63c6b88cb1485331ece237fddfb8403fadc953f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43488 - 38637 "HINFO IN 2909241615240097569.8602145173257113971. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014502773s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
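
The coredns failures above all share one symptom: TCP connections to the kubernetes Service VIP (10.96.0.1:443) time out while the apiserver is coming back up. A standalone Go sketch of that probe, using only the address and behaviour visible in the log (illustrative, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the kubernetes Service VIP seen in the reflector errors above.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		// On the failing node this prints a "dial tcp ... i/o timeout" error,
		// matching what the coredns kubernetes plugin reports.
		fmt.Println("apiserver VIP unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver VIP reachable")
}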
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-769461
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-769461
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=default-k8s-diff-port-769461
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_46_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:46:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-769461
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:48:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:48:28 +0000   Sat, 15 Nov 2025 11:46:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:48:28 +0000   Sat, 15 Nov 2025 11:46:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:48:28 +0000   Sat, 15 Nov 2025 11:46:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:48:28 +0000   Sat, 15 Nov 2025 11:47:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-769461
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                2d12c0bf-fabd-4e79-9141-b51555b040a7
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-xpkjw                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m25s
	  kube-system                 etcd-default-k8s-diff-port-769461                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m30s
	  kube-system                 kindnet-kzp2q                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-769461             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-769461    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-j8s2w                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-769461             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bll9w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dt85h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m23s                  kube-proxy       
	  Normal   Starting                 62s                    kube-proxy       
	  Warning  CgroupV1                 2m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m38s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m30s                  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m30s                  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s                  kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m26s                  node-controller  Node default-k8s-diff-port-769461 event: Registered Node default-k8s-diff-port-769461 in Controller
	  Normal   NodeReady                104s                   kubelet          Node default-k8s-diff-port-769461 status is now: NodeReady
	  Normal   Starting                 71s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-769461 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           61s                    node-controller  Node default-k8s-diff-port-769461 event: Registered Node default-k8s-diff-port-769461 in Controller
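
The Conditions table is what matters for the harness: Ready flipped to True once kubelet came back. A small client-go sketch for reading that condition programmatically; the kubeconfig path is an assumption for illustration, the node name is taken from the output above:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "default-k8s-diff-port-769461", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s (%s)\n", c.Status, c.Reason)
		}
	}
}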
	
	
	==> dmesg <==
	[Nov15 11:25] overlayfs: idmapped layers are currently not supported
	[Nov15 11:26] overlayfs: idmapped layers are currently not supported
	[Nov15 11:28] overlayfs: idmapped layers are currently not supported
	[ +23.116625] overlayfs: idmapped layers are currently not supported
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	[Nov15 11:44] overlayfs: idmapped layers are currently not supported
	[Nov15 11:46] overlayfs: idmapped layers are currently not supported
	[Nov15 11:47] overlayfs: idmapped layers are currently not supported
	[ +42.475391] overlayfs: idmapped layers are currently not supported
	[Nov15 11:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [faf86f2f211634e1d17c6370364e838bc04fe0108542f93851f68044cecfe2f9] <==
	{"level":"warn","ts":"2025-11-15T11:47:56.421563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.436545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.453455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.468795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.486956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.502537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.522535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.545904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.558996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.581050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.595126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.610206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.626212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.641163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.656661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.675608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.694676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.708523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.723954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.738500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.753962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.788652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.802638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.817574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:47:56.883641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52780","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:49:02 up  3:31,  0 user,  load average: 3.21, 3.20, 2.82
	Linux default-k8s-diff-port-769461 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [339704fd3e18f7555facc0bf0fdf7754a2f2d41f8760e86bd1a5494e1c73869d] <==
	I1115 11:47:58.697031       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:47:58.697464       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 11:47:58.698216       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:47:58.698244       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:47:58.698292       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:47:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:47:58.904159       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:47:58.904189       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:47:58.904206       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:47:58.904330       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 11:48:28.903162       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 11:48:28.903175       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 11:48:28.904433       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 11:48:28.904435       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 11:48:30.205298       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:48:30.205331       1 metrics.go:72] Registering metrics
	I1115 11:48:30.205398       1 controller.go:711] "Syncing nftables rules"
	I1115 11:48:38.905012       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:48:38.905221       1 main.go:301] handling current node
	I1115 11:48:48.909101       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:48:48.909201       1 main.go:301] handling current node
	I1115 11:48:58.907984       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:48:58.908016       1 main.go:301] handling current node
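
kindnet hits the same apiserver outage as coredns and then recovers ("Caches are synced") because client-go reflectors retry their list/watch after a failure. A minimal shared-informer sketch of that pattern (a sketch of the general mechanism, not kindnet's actual code):

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the sketch runs inside a pod
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// The reflector behind this informer retries list/watch on failure, which is why
	// kindnet logs "Failed to watch ... i/o timeout" and later "Caches are synced".
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	nodeInformer := factory.Core().V1().Nodes().Informer()
	nodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			node := obj.(*corev1.Node)
			log.Printf("handling node %s", node.Name)
		},
	})

	ctx := context.Background()
	factory.Start(ctx.Done())
	cache.WaitForCacheSync(ctx.Done(), nodeInformer.HasSynced)
	log.Println("caches are synced")
	<-ctx.Done() // run until killed
}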
	
	
	==> kube-apiserver [58a8cafbd658243739209adc98b5cca4fb51708fc98f57d93b11c6d97859707b] <==
	I1115 11:47:57.922470       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 11:47:57.922521       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 11:47:57.926568       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 11:47:57.926590       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 11:47:57.926691       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 11:47:57.938294       1 aggregator.go:171] initial CRD sync complete...
	I1115 11:47:57.938317       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 11:47:57.938324       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 11:47:57.938339       1 cache.go:39] Caches are synced for autoregister controller
	I1115 11:47:57.939485       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:47:57.944110       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 11:47:57.963850       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1115 11:47:57.969333       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 11:47:57.996139       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 11:47:58.098207       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 11:47:58.429118       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:47:58.850086       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 11:47:58.948722       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 11:47:59.026551       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:47:59.076845       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:47:59.354101       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.75.102"}
	I1115 11:47:59.378498       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.31.235"}
	I1115 11:48:01.369567       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 11:48:01.474465       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 11:48:01.568653       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c28f3e68692e829f48e01931512e3679a6223533e56ed8f074c9d056fafd4609] <==
	I1115 11:48:01.089673       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:48:01.093673       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 11:48:01.098416       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 11:48:01.113231       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 11:48:01.115587       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:48:01.117000       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 11:48:01.117138       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 11:48:01.117051       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 11:48:01.117856       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 11:48:01.119359       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:48:01.119510       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 11:48:01.120058       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 11:48:01.120135       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 11:48:01.120204       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 11:48:01.122084       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 11:48:01.122246       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 11:48:01.125696       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 11:48:01.130028       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 11:48:01.131759       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 11:48:01.136135       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 11:48:01.162156       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 11:48:01.163394       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 11:48:01.163732       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 11:48:01.163769       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 11:48:01.163832       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-proxy [096615ff4762fd1030ea22975fbda2deeafa29564f3d4a4bc42cb7213d7bca2e] <==
	I1115 11:47:58.798189       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:47:58.968496       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:47:59.070364       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:47:59.072990       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 11:47:59.073080       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:47:59.192925       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:47:59.193052       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:47:59.198471       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:47:59.199840       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:47:59.199917       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:47:59.205798       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:47:59.205869       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:47:59.206198       1 config.go:200] "Starting service config controller"
	I1115 11:47:59.206241       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:47:59.206722       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:47:59.206765       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:47:59.207209       1 config.go:309] "Starting node config controller"
	I1115 11:47:59.207216       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:47:59.207222       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:47:59.312026       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:47:59.312090       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 11:47:59.312152       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1222b8dec2b50ece8a4af1cb27e223b6a0079f14fc1c5ecf88240ddba9fe0ee0] <==
	I1115 11:47:57.782216       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:47:57.796003       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 11:47:57.796094       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:47:57.796114       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:47:57.796141       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1115 11:47:57.827143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 11:47:57.829227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 11:47:57.829315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 11:47:57.829382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 11:47:57.829473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 11:47:57.833411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 11:47:57.833523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 11:47:57.833596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 11:47:57.833709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 11:47:57.833767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:47:57.833825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 11:47:57.833900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:47:57.833957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 11:47:57.834042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 11:47:57.834163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 11:47:57.834331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 11:47:57.834375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 11:47:57.852496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 11:47:57.852669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1115 11:47:59.299173       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
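
The forbidden errors above are transient: the scheduler's informers start listing before the restarted apiserver has warmed up its RBAC caches, and they succeed on a later retry (hence the final "Caches are synced" line). A hedged sketch of checking one of those denied permissions with a SelfSubjectAccessReview; the kubeconfig path is an assumption, and the check runs as whatever identity that config carries:

package main

import (
	"context"
	"fmt"
	"log"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// "list pods at cluster scope" is one of the permissions denied in the log above.
	review := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "list",
				Resource: "pods",
			},
		},
	}
	resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed)
}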
	
	
	==> kubelet <==
	Nov 15 11:48:01 default-k8s-diff-port-769461 kubelet[780]: W1115 11:48:01.992414     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/crio-c5880ae2cc4cf84ca139cac35ab22d04d170e8b9609c17b272cb5186c2e96aa3 WatchSource:0}: Error finding container c5880ae2cc4cf84ca139cac35ab22d04d170e8b9609c17b272cb5186c2e96aa3: Status 404 returned error can't find the container with id c5880ae2cc4cf84ca139cac35ab22d04d170e8b9609c17b272cb5186c2e96aa3
	Nov 15 11:48:02 default-k8s-diff-port-769461 kubelet[780]: W1115 11:48:02.021379     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6bc3c2610e90ecf600d0f836db699e5186b3038e5ccdb1b4d8306dc349aa9054/crio-cb5cb975766c3619048bf903573f8f07dfc3fc89af5b7cd7c245ec66fea6e5bb WatchSource:0}: Error finding container cb5cb975766c3619048bf903573f8f07dfc3fc89af5b7cd7c245ec66fea6e5bb: Status 404 returned error can't find the container with id cb5cb975766c3619048bf903573f8f07dfc3fc89af5b7cd7c245ec66fea6e5bb
	Nov 15 11:48:07 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:07.309674     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dt85h" podStartSLOduration=1.440605714 podStartE2EDuration="6.309656798s" podCreationTimestamp="2025-11-15 11:48:01 +0000 UTC" firstStartedPulling="2025-11-15 11:48:01.996138232 +0000 UTC m=+10.112635242" lastFinishedPulling="2025-11-15 11:48:06.865189315 +0000 UTC m=+14.981686326" observedRunningTime="2025-11-15 11:48:07.3086565 +0000 UTC m=+15.425153519" watchObservedRunningTime="2025-11-15 11:48:07.309656798 +0000 UTC m=+15.426153809"
	Nov 15 11:48:12 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:12.259641     780 scope.go:117] "RemoveContainer" containerID="a724efbf495e16d52766b4b6cace9d9a566ec8dc057d3e7576be260ed7bd62db"
	Nov 15 11:48:13 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:13.264037     780 scope.go:117] "RemoveContainer" containerID="a724efbf495e16d52766b4b6cace9d9a566ec8dc057d3e7576be260ed7bd62db"
	Nov 15 11:48:13 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:13.264343     780 scope.go:117] "RemoveContainer" containerID="ed827e2b985d4cb407886fec0fcb7c0dd1897397f3ea13db38942360d4f92a50"
	Nov 15 11:48:13 default-k8s-diff-port-769461 kubelet[780]: E1115 11:48:13.264490     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bll9w_kubernetes-dashboard(872b83f0-6c19-4852-b060-34e579413e97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w" podUID="872b83f0-6c19-4852-b060-34e579413e97"
	Nov 15 11:48:14 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:14.268478     780 scope.go:117] "RemoveContainer" containerID="ed827e2b985d4cb407886fec0fcb7c0dd1897397f3ea13db38942360d4f92a50"
	Nov 15 11:48:14 default-k8s-diff-port-769461 kubelet[780]: E1115 11:48:14.268637     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bll9w_kubernetes-dashboard(872b83f0-6c19-4852-b060-34e579413e97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w" podUID="872b83f0-6c19-4852-b060-34e579413e97"
	Nov 15 11:48:15 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:15.993545     780 scope.go:117] "RemoveContainer" containerID="ed827e2b985d4cb407886fec0fcb7c0dd1897397f3ea13db38942360d4f92a50"
	Nov 15 11:48:15 default-k8s-diff-port-769461 kubelet[780]: E1115 11:48:15.993736     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bll9w_kubernetes-dashboard(872b83f0-6c19-4852-b060-34e579413e97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w" podUID="872b83f0-6c19-4852-b060-34e579413e97"
	Nov 15 11:48:27 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:27.038799     780 scope.go:117] "RemoveContainer" containerID="ed827e2b985d4cb407886fec0fcb7c0dd1897397f3ea13db38942360d4f92a50"
	Nov 15 11:48:27 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:27.299671     780 scope.go:117] "RemoveContainer" containerID="ed827e2b985d4cb407886fec0fcb7c0dd1897397f3ea13db38942360d4f92a50"
	Nov 15 11:48:27 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:27.300007     780 scope.go:117] "RemoveContainer" containerID="60d38c935882c33e051eef1710d98c8b072bc32a6e2aa23e4c358e09294beb12"
	Nov 15 11:48:27 default-k8s-diff-port-769461 kubelet[780]: E1115 11:48:27.300157     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bll9w_kubernetes-dashboard(872b83f0-6c19-4852-b060-34e579413e97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w" podUID="872b83f0-6c19-4852-b060-34e579413e97"
	Nov 15 11:48:29 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:29.307907     780 scope.go:117] "RemoveContainer" containerID="71751d3ff5736d4c1ddda1fdd64370dbafe7788b817bada80da23c901e7a380a"
	Nov 15 11:48:35 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:35.993393     780 scope.go:117] "RemoveContainer" containerID="60d38c935882c33e051eef1710d98c8b072bc32a6e2aa23e4c358e09294beb12"
	Nov 15 11:48:35 default-k8s-diff-port-769461 kubelet[780]: E1115 11:48:35.993999     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bll9w_kubernetes-dashboard(872b83f0-6c19-4852-b060-34e579413e97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w" podUID="872b83f0-6c19-4852-b060-34e579413e97"
	Nov 15 11:48:51 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:51.039493     780 scope.go:117] "RemoveContainer" containerID="60d38c935882c33e051eef1710d98c8b072bc32a6e2aa23e4c358e09294beb12"
	Nov 15 11:48:51 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:51.364336     780 scope.go:117] "RemoveContainer" containerID="60d38c935882c33e051eef1710d98c8b072bc32a6e2aa23e4c358e09294beb12"
	Nov 15 11:48:51 default-k8s-diff-port-769461 kubelet[780]: I1115 11:48:51.364692     780 scope.go:117] "RemoveContainer" containerID="c7f77e12165ebe38cbceae295cb465cfa8a6c106a6d30c3c8f42ceb144d7d207"
	Nov 15 11:48:51 default-k8s-diff-port-769461 kubelet[780]: E1115 11:48:51.364904     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bll9w_kubernetes-dashboard(872b83f0-6c19-4852-b060-34e579413e97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bll9w" podUID="872b83f0-6c19-4852-b060-34e579413e97"
	Nov 15 11:48:55 default-k8s-diff-port-769461 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 11:48:55 default-k8s-diff-port-769461 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 11:48:55 default-k8s-diff-port-769461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
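
The dashboard-metrics-scraper restarts above show kubelet's CrashLoopBackOff delay doubling: back-off 10s, then 20s, then 40s. Kubelet caps this delay at five minutes. A toy sketch of that progression (illustrative arithmetic only, not kubelet code):

package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 10 * time.Second // initial CrashLoopBackOff delay seen in the log
	maxDelay := 5 * time.Minute // kubelet's maximum back-off
	for restart := 1; restart <= 6; restart++ {
		fmt.Printf("restart %d: wait %s before next attempt\n", restart, backoff)
		backoff *= 2
		if backoff > maxDelay {
			backoff = maxDelay
		}
	}
}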
	
	
	==> kubernetes-dashboard [ba6623797a0ea7a24e6b56ca1b002092d0fb220ca803cf4677a6087af7eee357] <==
	2025/11/15 11:48:06 Using namespace: kubernetes-dashboard
	2025/11/15 11:48:06 Using in-cluster config to connect to apiserver
	2025/11/15 11:48:06 Using secret token for csrf signing
	2025/11/15 11:48:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 11:48:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 11:48:06 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 11:48:06 Generating JWE encryption key
	2025/11/15 11:48:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 11:48:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 11:48:08 Initializing JWE encryption key from synchronized object
	2025/11/15 11:48:08 Creating in-cluster Sidecar client
	2025/11/15 11:48:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 11:48:08 Serving insecurely on HTTP port: 9090
	2025/11/15 11:48:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 11:48:06 Starting overwatch
	
	
	==> storage-provisioner [71751d3ff5736d4c1ddda1fdd64370dbafe7788b817bada80da23c901e7a380a] <==
	I1115 11:47:58.649880       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 11:48:28.652247       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
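
This earlier storage-provisioner instance exited on its startup check: the GET /version call against the in-cluster apiserver timed out. A minimal client-go sketch of the same probe (assumes it runs in a pod with a service account; not the provisioner's actual source):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // uses the pod's service account and the kubernetes Service VIP
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	info, err := clientset.Discovery().ServerVersion() // the call that timed out above
	if err != nil {
		log.Fatalf("error getting server version: %v", err)
	}
	fmt.Println("apiserver version:", info.GitVersion)
}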
	
	
	==> storage-provisioner [a6a07662328b4265eb840a2cc587982ae3774637d07cd67bc54699170e319aab] <==
	W1115 11:48:37.085231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:40.683957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:43.738210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:46.761089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:46.766211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:48:46.766380       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 11:48:46.766538       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-769461_e629501c-487c-4d10-9b1f-49b11fb3658d!
	I1115 11:48:46.767413       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c930a73f-6b14-48e2-977d-fde466625e84", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-769461_e629501c-487c-4d10-9b1f-49b11fb3658d became leader
	W1115 11:48:46.771109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:46.789123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:48:46.868924       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-769461_e629501c-487c-4d10-9b1f-49b11fb3658d!
	W1115 11:48:48.791788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:48.798422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:50.804542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:50.809999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:52.814278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:52.819993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:54.828080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:54.840271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:56.843339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:56.848613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:58.852649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:48:58.868894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:00.884095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:00.891434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
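
The repeated deprecation warnings come from the provisioner using a v1 Endpoints object (k8s.io-minikube-hostpath, visible in the LeaderElection event above) as its leader-election lock. A custom controller would normally use a coordination.k8s.io Lease lock instead; a hedged sketch with client-go's leaderelection package, reusing the lock name and namespace from the log (this is not the provisioner's implementation):

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes an in-cluster pod with a service account
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)
	hostname, _ := os.Hostname()

	// Lease-based lock instead of the deprecated Endpoints-based lock.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     clientset.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; starting provisioning loop")
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership; stopping")
			},
		},
	})
}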
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-769461 -n default-k8s-diff-port-769461
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-769461 -n default-k8s-diff-port-769461: exit status 2 (357.340981ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-769461 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.94s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (8.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-404149 --alsologtostderr -v=1
E1115 11:49:36.560650  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-404149 --alsologtostderr -v=1: exit status 80 (2.050094231s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-404149 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:49:34.790112  790482 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:49:34.790265  790482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:49:34.790272  790482 out.go:374] Setting ErrFile to fd 2...
	I1115 11:49:34.790277  790482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:49:34.790517  790482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:49:34.790791  790482 out.go:368] Setting JSON to false
	I1115 11:49:34.790808  790482 mustload.go:66] Loading cluster: embed-certs-404149
	I1115 11:49:34.791183  790482 config.go:182] Loaded profile config "embed-certs-404149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:49:34.791615  790482 cli_runner.go:164] Run: docker container inspect embed-certs-404149 --format={{.State.Status}}
	I1115 11:49:34.814456  790482 host.go:66] Checking if "embed-certs-404149" exists ...
	I1115 11:49:34.814779  790482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:49:34.892738  790482 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:78 SystemTime:2025-11-15 11:49:34.882508591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:49:34.893446  790482 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-404149 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 11:49:34.897068  790482 out.go:179] * Pausing node embed-certs-404149 ... 
	I1115 11:49:34.900083  790482 host.go:66] Checking if "embed-certs-404149" exists ...
	I1115 11:49:34.900406  790482 ssh_runner.go:195] Run: systemctl --version
	I1115 11:49:34.900460  790482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-404149
	I1115 11:49:34.926272  790482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/embed-certs-404149/id_rsa Username:docker}
	I1115 11:49:35.032429  790482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:49:35.047342  790482 pause.go:52] kubelet running: true
	I1115 11:49:35.047415  790482 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:49:35.368466  790482 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:49:35.368573  790482 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:49:35.452488  790482 cri.go:89] found id: "d7c6dc21b5fd9f658868453f2c488629e112e772ea09f0596e980ba333d294cb"
	I1115 11:49:35.452553  790482 cri.go:89] found id: "09258c4ef606211d4569a1f07e1868b18902b874617b2f6556a7c2f17f7edb9d"
	I1115 11:49:35.452573  790482 cri.go:89] found id: "496e2fb54178ec02d3986f84953b12a001e15ee7cc882c83e58e00fbd053f25b"
	I1115 11:49:35.452594  790482 cri.go:89] found id: "ee0d081abddf11575d32cec2a52f4a3d14483c7066159cc5a3b99aa279f76238"
	I1115 11:49:35.452634  790482 cri.go:89] found id: "80745c56ff5e6bc966333b250babd40241909a49ddafe4142822f4aa0c5dfe6e"
	I1115 11:49:35.452661  790482 cri.go:89] found id: "8fe33f405cefe31c9ab389c51d0c2b2ca0f66c055679053ef5665058df3e4a50"
	I1115 11:49:35.452677  790482 cri.go:89] found id: "b3bad56f102bafd52e8e47890a2907bc310240d0d6905fdf10422d09d338938d"
	I1115 11:49:35.452694  790482 cri.go:89] found id: "9412dd63cbe6ee0643666a35f225412ac451380045d2849d5220158a0db17940"
	I1115 11:49:35.452712  790482 cri.go:89] found id: "f782a05f34be564eb380a59a1f625d50f0d686d350bc75f48d4e7b5587a399bb"
	I1115 11:49:35.452746  790482 cri.go:89] found id: "c0aea60eb23fba11411e06b64480c0864177c8fd4eb87503bd582fb506b554c1"
	I1115 11:49:35.452776  790482 cri.go:89] found id: "cc2532c9ae8316b0f9a928a64b853c1143cd4bf2cc7096607b847819a61c8908"
	I1115 11:49:35.452794  790482 cri.go:89] found id: ""
	I1115 11:49:35.452913  790482 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:49:35.467382  790482 retry.go:31] will retry after 283.760386ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:49:35Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:49:35.751991  790482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:49:35.766972  790482 pause.go:52] kubelet running: false
	I1115 11:49:35.767087  790482 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:49:35.968889  790482 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:49:35.968970  790482 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:49:36.057941  790482 cri.go:89] found id: "d7c6dc21b5fd9f658868453f2c488629e112e772ea09f0596e980ba333d294cb"
	I1115 11:49:36.057966  790482 cri.go:89] found id: "09258c4ef606211d4569a1f07e1868b18902b874617b2f6556a7c2f17f7edb9d"
	I1115 11:49:36.057971  790482 cri.go:89] found id: "496e2fb54178ec02d3986f84953b12a001e15ee7cc882c83e58e00fbd053f25b"
	I1115 11:49:36.057975  790482 cri.go:89] found id: "ee0d081abddf11575d32cec2a52f4a3d14483c7066159cc5a3b99aa279f76238"
	I1115 11:49:36.057979  790482 cri.go:89] found id: "80745c56ff5e6bc966333b250babd40241909a49ddafe4142822f4aa0c5dfe6e"
	I1115 11:49:36.057983  790482 cri.go:89] found id: "8fe33f405cefe31c9ab389c51d0c2b2ca0f66c055679053ef5665058df3e4a50"
	I1115 11:49:36.057986  790482 cri.go:89] found id: "b3bad56f102bafd52e8e47890a2907bc310240d0d6905fdf10422d09d338938d"
	I1115 11:49:36.057989  790482 cri.go:89] found id: "9412dd63cbe6ee0643666a35f225412ac451380045d2849d5220158a0db17940"
	I1115 11:49:36.057993  790482 cri.go:89] found id: "f782a05f34be564eb380a59a1f625d50f0d686d350bc75f48d4e7b5587a399bb"
	I1115 11:49:36.058000  790482 cri.go:89] found id: "c0aea60eb23fba11411e06b64480c0864177c8fd4eb87503bd582fb506b554c1"
	I1115 11:49:36.058003  790482 cri.go:89] found id: "cc2532c9ae8316b0f9a928a64b853c1143cd4bf2cc7096607b847819a61c8908"
	I1115 11:49:36.058007  790482 cri.go:89] found id: ""
	I1115 11:49:36.058055  790482 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:49:36.069172  790482 retry.go:31] will retry after 354.974977ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:49:36Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:49:36.424850  790482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:49:36.439610  790482 pause.go:52] kubelet running: false
	I1115 11:49:36.439668  790482 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:49:36.660388  790482 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:49:36.660471  790482 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:49:36.739390  790482 cri.go:89] found id: "d7c6dc21b5fd9f658868453f2c488629e112e772ea09f0596e980ba333d294cb"
	I1115 11:49:36.739411  790482 cri.go:89] found id: "09258c4ef606211d4569a1f07e1868b18902b874617b2f6556a7c2f17f7edb9d"
	I1115 11:49:36.739416  790482 cri.go:89] found id: "496e2fb54178ec02d3986f84953b12a001e15ee7cc882c83e58e00fbd053f25b"
	I1115 11:49:36.739419  790482 cri.go:89] found id: "ee0d081abddf11575d32cec2a52f4a3d14483c7066159cc5a3b99aa279f76238"
	I1115 11:49:36.739423  790482 cri.go:89] found id: "80745c56ff5e6bc966333b250babd40241909a49ddafe4142822f4aa0c5dfe6e"
	I1115 11:49:36.739426  790482 cri.go:89] found id: "8fe33f405cefe31c9ab389c51d0c2b2ca0f66c055679053ef5665058df3e4a50"
	I1115 11:49:36.739429  790482 cri.go:89] found id: "b3bad56f102bafd52e8e47890a2907bc310240d0d6905fdf10422d09d338938d"
	I1115 11:49:36.739432  790482 cri.go:89] found id: "9412dd63cbe6ee0643666a35f225412ac451380045d2849d5220158a0db17940"
	I1115 11:49:36.739435  790482 cri.go:89] found id: "f782a05f34be564eb380a59a1f625d50f0d686d350bc75f48d4e7b5587a399bb"
	I1115 11:49:36.739441  790482 cri.go:89] found id: "c0aea60eb23fba11411e06b64480c0864177c8fd4eb87503bd582fb506b554c1"
	I1115 11:49:36.739444  790482 cri.go:89] found id: "cc2532c9ae8316b0f9a928a64b853c1143cd4bf2cc7096607b847819a61c8908"
	I1115 11:49:36.739447  790482 cri.go:89] found id: ""
	I1115 11:49:36.739497  790482 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:49:36.754114  790482 out.go:203] 
	W1115 11:49:36.756996  790482 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:49:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:49:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 11:49:36.757067  790482 out.go:285] * 
	* 
	W1115 11:49:36.763837  790482 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 11:49:36.766795  790482 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-404149 --alsologtostderr -v=1 failed: exit status 80
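Note: the GUEST_PAUSE failure above comes down to the container-listing step. The pause path lists CRI containers with crictl (successfully, as the "found id" lines show) and then runs `sudo runc list -f json`, which exits with status 1 on this crio node because the runc state directory /run/runc does not exist; after the retries the command aborts with exit status 80. The following is a minimal, hypothetical reproduction sketch of that one step (it is not minikube's pause implementation, and it assumes it is run directly on the node with sudo available):

    // repro.go: mirrors the failing step from the log above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Same command the log shows minikube running over SSH.
        out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
        if err != nil {
            fmt.Printf("runc list failed: %v\n%s", err, out)
            // Matches the stderr in the log: "open /run/runc: no such file or directory".
            if _, statErr := os.Stat("/run/runc"); os.IsNotExist(statErr) {
                fmt.Println("runc state dir /run/runc is absent; on this crio node containers are only visible via crictl")
            }
            return
        }
        fmt.Printf("runc containers:\n%s", out)
    }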
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-404149
helpers_test.go:243: (dbg) docker inspect embed-certs-404149:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408",
	        "Created": "2025-11-15T11:46:51.97222958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 784416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:48:31.636067246Z",
	            "FinishedAt": "2025-11-15T11:48:30.783477133Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/hostname",
	        "HostsPath": "/var/lib/docker/containers/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/hosts",
	        "LogPath": "/var/lib/docker/containers/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408-json.log",
	        "Name": "/embed-certs-404149",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-404149:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-404149",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408",
	                "LowerDir": "/var/lib/docker/overlay2/499cc6850e7e43e93965ff14ffb04ef4e117996f45283ec5f42c89d1ea43216c-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/499cc6850e7e43e93965ff14ffb04ef4e117996f45283ec5f42c89d1ea43216c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/499cc6850e7e43e93965ff14ffb04ef4e117996f45283ec5f42c89d1ea43216c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/499cc6850e7e43e93965ff14ffb04ef4e117996f45283ec5f42c89d1ea43216c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-404149",
	                "Source": "/var/lib/docker/volumes/embed-certs-404149/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-404149",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-404149",
	                "name.minikube.sigs.k8s.io": "embed-certs-404149",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36e6f18627acf3d0af0ec3283356927ad4e178f512b995a769473ae566dcbcb1",
	            "SandboxKey": "/var/run/docker/netns/36e6f18627ac",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-404149": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:84:70:90:59:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7bb35a9e63004fb5710c19eaa0fed0c73a27efd3fdd5fdafde151cb4543696cc",
	                    "EndpointID": "1f5fdfe5bebbc07506b83bf92e0662a88fc4344cfe8f72d8f9d209dbea13e156",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-404149",
	                        "69e998144c08"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
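Note: the SSH endpoint used earlier in the pause attempt (sshutil: new ssh client IP 127.0.0.1, Port 33814) corresponds to the "22/tcp" entry under NetworkSettings.Ports in the inspect output above, which minikube extracts with the Go template shown in the log (`docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-404149`). A minimal sketch of recovering the same mapping from the raw `docker inspect` JSON follows; it is illustrative only, not minikube's code, and assumes docker and the embed-certs-404149 container are present:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Only the fields needed for the port lookup are declared.
    type inspectEntry struct {
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIp   string
                HostPort string
            }
        }
    }

    func main() {
        out, err := exec.Command("docker", "inspect", "embed-certs-404149").Output()
        if err != nil {
            panic(err)
        }
        var entries []inspectEntry
        if err := json.Unmarshal(out, &entries); err != nil {
            panic(err)
        }
        // docker inspect returns an array; the SSH mapping lives under "22/tcp".
        if len(entries) > 0 {
            if bindings := entries[0].NetworkSettings.Ports["22/tcp"]; len(bindings) > 0 {
                fmt.Printf("ssh reachable at %s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
            }
        }
    }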
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-404149 -n embed-certs-404149
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-404149 -n embed-certs-404149: exit status 2 (426.034272ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-404149 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-404149 logs -n 25: (1.706604627s)
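Note: the captured `minikube logs -n 25` output below begins with an Audit table of the recent minikube commands on this host, followed by the interleaved start logs. Per the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" header inside the capture, a line such as "I1115 11:49:06.158503  787845 out.go:360]" reads as: Info severity, November 15, 11:49:06.158503, thread/process id 787845, emitted from out.go line 360.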
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:45 UTC │
	│ image   │ old-k8s-version-872969 image list --format=json                                                                                                                                                                                               │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ pause   │ -p old-k8s-version-872969 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │                     │
	│ delete  │ -p old-k8s-version-872969                                                                                                                                                                                                                     │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ delete  │ -p old-k8s-version-872969                                                                                                                                                                                                                     │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ start   │ -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:47 UTC │
	│ start   │ -p cert-expiration-636406 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ delete  │ -p cert-expiration-636406                                                                                                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-769461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-769461 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:47 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-769461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:47 UTC │
	│ start   │ -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable metrics-server -p embed-certs-404149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ stop    │ -p embed-certs-404149 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable dashboard -p embed-certs-404149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:49 UTC │
	│ image   │ default-k8s-diff-port-769461 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ pause   │ -p default-k8s-diff-port-769461 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p disable-driver-mounts-200933                                                                                                                                                                                                               │ disable-driver-mounts-200933 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ start   │ -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │                     │
	│ image   │ embed-certs-404149 image list --format=json                                                                                                                                                                                                   │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ pause   │ -p embed-certs-404149 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:49:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:49:06.158503  787845 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:49:06.158629  787845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:49:06.158638  787845 out.go:374] Setting ErrFile to fd 2...
	I1115 11:49:06.158643  787845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:49:06.158902  787845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:49:06.159296  787845 out.go:368] Setting JSON to false
	I1115 11:49:06.160251  787845 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12697,"bootTime":1763194649,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:49:06.160318  787845 start.go:143] virtualization:  
	I1115 11:49:06.164115  787845 out.go:179] * [no-preload-126380] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:49:06.168127  787845 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:49:06.168285  787845 notify.go:221] Checking for updates...
	I1115 11:49:06.174308  787845 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:49:06.177412  787845 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:49:06.180372  787845 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:49:06.183410  787845 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:49:06.186354  787845 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:49:06.189813  787845 config.go:182] Loaded profile config "embed-certs-404149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:49:06.190026  787845 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:49:06.221201  787845 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:49:06.221326  787845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:49:06.281235  787845 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:49:06.27166405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:49:06.281350  787845 docker.go:319] overlay module found
	I1115 11:49:06.284585  787845 out.go:179] * Using the docker driver based on user configuration
	W1115 11:49:02.740312  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	W1115 11:49:04.741077  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	I1115 11:49:06.287547  787845 start.go:309] selected driver: docker
	I1115 11:49:06.287568  787845 start.go:930] validating driver "docker" against <nil>
	I1115 11:49:06.287582  787845 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:49:06.288338  787845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:49:06.348646  787845 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:49:06.339411192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:49:06.348814  787845 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 11:49:06.349068  787845 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:49:06.351906  787845 out.go:179] * Using Docker driver with root privileges
	I1115 11:49:06.354777  787845 cni.go:84] Creating CNI manager for ""
	I1115 11:49:06.354936  787845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:49:06.354950  787845 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 11:49:06.355033  787845 start.go:353] cluster config:
	{Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:49:06.358064  787845 out.go:179] * Starting "no-preload-126380" primary control-plane node in "no-preload-126380" cluster
	I1115 11:49:06.360929  787845 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:49:06.363946  787845 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:49:06.366920  787845 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:49:06.366991  787845 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:49:06.367051  787845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/config.json ...
	I1115 11:49:06.367083  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/config.json: {Name:mk9b4ca08b66711cad2f7c3ab350d005b0392d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:06.367336  787845 cache.go:107] acquiring lock: {Name:mk91726f44286832b0046d8499f5d58ff7ad2b6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.367391  787845 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1115 11:49:06.367399  787845 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.615µs
	I1115 11:49:06.367407  787845 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1115 11:49:06.367424  787845 cache.go:107] acquiring lock: {Name:mk100238a706e702239a000cdfd80c281f376431 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.367489  787845 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:06.367874  787845 cache.go:107] acquiring lock: {Name:mk15eeacf94b66be4392721a733df868bc784101 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.367974  787845 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:06.368249  787845 cache.go:107] acquiring lock: {Name:mkb04d459fbb71ba8df962665fc7ab481f00418b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.368343  787845 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:06.368644  787845 cache.go:107] acquiring lock: {Name:mkb69d6ceae6b9540e167400909c918adeec9369 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.368746  787845 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:06.369041  787845 cache.go:107] acquiring lock: {Name:mk10696b84637583e56394b885fa921b6d221577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.369140  787845 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1115 11:49:06.369427  787845 cache.go:107] acquiring lock: {Name:mk87d816e36c32f87fd55930f6a9d59e6dfc4a0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.369553  787845 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:06.369802  787845 cache.go:107] acquiring lock: {Name:mkd034e18ce491e5f4eb3166d5f81cee9da0de03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.369953  787845 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:06.372398  787845 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:06.372894  787845 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1115 11:49:06.373143  787845 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:06.373465  787845 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:06.373610  787845 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:06.373877  787845 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:06.374082  787845 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:06.398830  787845 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:49:06.398855  787845 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:49:06.398874  787845 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:49:06.398899  787845 start.go:360] acquireMachinesLock for no-preload-126380: {Name:mk5469ab80c2d37eee16becc95c7569af1cc4687 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.399017  787845 start.go:364] duration metric: took 96.887µs to acquireMachinesLock for "no-preload-126380"
	I1115 11:49:06.399046  787845 start.go:93] Provisioning new machine with config: &{Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:49:06.399114  787845 start.go:125] createHost starting for "" (driver="docker")
	I1115 11:49:06.404637  787845 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 11:49:06.404911  787845 start.go:159] libmachine.API.Create for "no-preload-126380" (driver="docker")
	I1115 11:49:06.404949  787845 client.go:173] LocalClient.Create starting
	I1115 11:49:06.405034  787845 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 11:49:06.405071  787845 main.go:143] libmachine: Decoding PEM data...
	I1115 11:49:06.405087  787845 main.go:143] libmachine: Parsing certificate...
	I1115 11:49:06.405143  787845 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 11:49:06.405191  787845 main.go:143] libmachine: Decoding PEM data...
	I1115 11:49:06.405208  787845 main.go:143] libmachine: Parsing certificate...
	I1115 11:49:06.405688  787845 cli_runner.go:164] Run: docker network inspect no-preload-126380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 11:49:06.430371  787845 cli_runner.go:211] docker network inspect no-preload-126380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 11:49:06.430460  787845 network_create.go:284] running [docker network inspect no-preload-126380] to gather additional debugging logs...
	I1115 11:49:06.430483  787845 cli_runner.go:164] Run: docker network inspect no-preload-126380
	W1115 11:49:06.447951  787845 cli_runner.go:211] docker network inspect no-preload-126380 returned with exit code 1
	I1115 11:49:06.447981  787845 network_create.go:287] error running [docker network inspect no-preload-126380]: docker network inspect no-preload-126380: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-126380 not found
	I1115 11:49:06.448009  787845 network_create.go:289] output of [docker network inspect no-preload-126380]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-126380 not found
	
	** /stderr **
	I1115 11:49:06.448099  787845 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:49:06.464200  787845 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-70b4341e5839 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:cf:e4:18:31:11} reservation:<nil>}
	I1115 11:49:06.464545  787845 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5353e0ad5224 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:f4:9a:df:ce:52} reservation:<nil>}
	I1115 11:49:06.465024  787845 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-cf2ab118f937 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:c9:22:19:21:27} reservation:<nil>}
	I1115 11:49:06.465435  787845 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7bb35a9e6300 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d6:2f:88:7f:d7:d9} reservation:<nil>}
	I1115 11:49:06.466375  787845 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bcbf50}
	I1115 11:49:06.466451  787845 network_create.go:124] attempt to create docker network no-preload-126380 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1115 11:49:06.466541  787845 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-126380 no-preload-126380
	I1115 11:49:06.542635  787845 network_create.go:108] docker network no-preload-126380 192.168.85.0/24 created
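
For context, the subnet probing above (network.go:211 skipping taken /24 ranges, network.go:206 settling on 192.168.85.0/24) and the following "docker network create" can be approximated with the Go sketch below. It is illustrative only, not minikube's network_create.go: the "taken" check here simply lets Docker reject an overlapping pool, whereas the log shows minikube inspecting host interfaces first; the candidate step (third octet 49, 58, 67, 76, 85, ...) and the create flags are taken from the log.

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	name := "no-preload-126380" // profile name from the log above
    	// Candidate private /24 subnets, stepping the third octet by 9,
    	// as seen in the "skipping subnet ... that is taken" messages.
    	for octet := 49; octet <= 103; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		gateway := fmt.Sprintf("192.168.%d.1", octet)
    		// Same flags as the "docker network create" invocation in the log.
    		cmd := exec.Command("docker", "network", "create",
    			"--driver=bridge",
    			"--subnet="+subnet,
    			"--gateway="+gateway,
    			"-o", "--ip-masq", "-o", "--icc",
    			"-o", "com.docker.network.driver.mtu=1500",
    			"--label=created_by.minikube.sigs.k8s.io=true",
    			"--label=name.minikube.sigs.k8s.io="+name,
    			name)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			log.Printf("subnet %s unavailable, trying next: %s", subnet, out)
    			continue
    		}
    		log.Printf("created network %q on %s (gateway %s)", name, subnet, gateway)
    		return
    	}
    	log.Fatal("no free private subnet found")
    }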
	I1115 11:49:06.542669  787845 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-126380" container
	I1115 11:49:06.542741  787845 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 11:49:06.559981  787845 cli_runner.go:164] Run: docker volume create no-preload-126380 --label name.minikube.sigs.k8s.io=no-preload-126380 --label created_by.minikube.sigs.k8s.io=true
	I1115 11:49:06.577852  787845 oci.go:103] Successfully created a docker volume no-preload-126380
	I1115 11:49:06.577938  787845 cli_runner.go:164] Run: docker run --rm --name no-preload-126380-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-126380 --entrypoint /usr/bin/test -v no-preload-126380:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 11:49:06.729865  787845 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1115 11:49:06.740708  787845 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1115 11:49:06.747220  787845 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1115 11:49:06.747884  787845 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1115 11:49:06.813067  787845 cache.go:157] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1115 11:49:06.813140  787845 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 444.102936ms
	I1115 11:49:06.813175  787845 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1115 11:49:06.824076  787845 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1115 11:49:06.830279  787845 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1115 11:49:06.836706  787845 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1115 11:49:07.193812  787845 cache.go:157] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1115 11:49:07.193880  787845 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 825.239178ms
	I1115 11:49:07.193906  787845 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1115 11:49:07.242096  787845 oci.go:107] Successfully prepared a docker volume no-preload-126380
	I1115 11:49:07.242139  787845 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1115 11:49:07.242270  787845 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 11:49:07.242461  787845 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 11:49:07.301859  787845 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-126380 --name no-preload-126380 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-126380 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-126380 --network no-preload-126380 --ip 192.168.85.2 --volume no-preload-126380:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
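
The single "docker run" line above is dense; the hedged Go sketch below restates the same invocation with one flag per argument and comments on what each does. The image digest, labels, IP, and port list are copied from the log; the program itself is an illustration, not minikube's oci.go.

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	name := "no-preload-126380"
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1"
    	args := []string{
    		"run", "-d", "-t",
    		"--privileged", // the node container runs its own init, kubelet and CRI-O
    		"--security-opt", "seccomp=unconfined",
    		"--security-opt", "apparmor=unconfined",
    		"--tmpfs", "/tmp", "--tmpfs", "/run",
    		"-v", "/lib/modules:/lib/modules:ro",
    		"--hostname", name, "--name", name,
    		"--label", "created_by.minikube.sigs.k8s.io=true",
    		"--label", "name.minikube.sigs.k8s.io=" + name,
    		"--label", "role.minikube.sigs.k8s.io=",
    		"--label", "mode.minikube.sigs.k8s.io=" + name,
    		"--network", name,      // the bridge created just above
    		"--ip", "192.168.85.2", // static IP calculated in kic.go:121
    		"--volume", name + ":/var", // the volume prepared by the preload sidecar
    		"--memory=3072mb", "--cpus=2",
    		"-e", "container=docker",
    		"--expose", "8443",
    		// 127.0.0.1::N publishes container port N on a random local host port
    		"--publish=127.0.0.1::8443", "--publish=127.0.0.1::22",
    		"--publish=127.0.0.1::2376", "--publish=127.0.0.1::5000",
    		"--publish=127.0.0.1::32443",
    		image,
    	}
    	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
    		log.Fatalf("docker run failed: %v\n%s", err, out)
    	}
    }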
	I1115 11:49:07.726790  787845 cache.go:157] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1115 11:49:07.726867  787845 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.357069835s
	I1115 11:49:07.726898  787845 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1115 11:49:07.728287  787845 cache.go:157] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1115 11:49:07.728325  787845 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.360079098s
	I1115 11:49:07.728335  787845 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1115 11:49:07.761573  787845 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Running}}
	I1115 11:49:07.825004  787845 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:49:07.833645  787845 cache.go:157] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1115 11:49:07.833832  787845 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.465959812s
	I1115 11:49:07.833862  787845 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1115 11:49:07.884006  787845 cache.go:157] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1115 11:49:07.884039  787845 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.516620411s
	I1115 11:49:07.884052  787845 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1115 11:49:07.911472  787845 cli_runner.go:164] Run: docker exec no-preload-126380 stat /var/lib/dpkg/alternatives/iptables
	I1115 11:49:08.025147  787845 oci.go:144] the created container "no-preload-126380" has a running status.
	I1115 11:49:08.025190  787845 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa...
	I1115 11:49:08.475671  787845 cache.go:157] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1115 11:49:08.478018  787845 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.108562311s
	I1115 11:49:08.478058  787845 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1115 11:49:08.478224  787845 cache.go:87] Successfully saved all images to host disk.
	I1115 11:49:08.679324  787845 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 11:49:08.708224  787845 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:49:08.730008  787845 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 11:49:08.730031  787845 kic_runner.go:114] Args: [docker exec --privileged no-preload-126380 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 11:49:08.802822  787845 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:49:08.820387  787845 machine.go:94] provisionDockerMachine start ...
	I1115 11:49:08.822839  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:08.843712  787845 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:08.844040  787845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 11:49:08.844050  787845 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:49:09.009342  787845 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-126380
	
	I1115 11:49:09.009416  787845 ubuntu.go:182] provisioning hostname "no-preload-126380"
	I1115 11:49:09.009505  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:09.031757  787845 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:09.032113  787845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 11:49:09.032130  787845 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-126380 && echo "no-preload-126380" | sudo tee /etc/hostname
	I1115 11:49:09.243819  787845 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-126380
	
	I1115 11:49:09.243964  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:09.264313  787845 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:09.264758  787845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 11:49:09.264816  787845 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-126380' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-126380/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-126380' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:49:09.425237  787845 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:49:09.425260  787845 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:49:09.425290  787845 ubuntu.go:190] setting up certificates
	I1115 11:49:09.425301  787845 provision.go:84] configureAuth start
	I1115 11:49:09.425360  787845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-126380
	I1115 11:49:09.444799  787845 provision.go:143] copyHostCerts
	I1115 11:49:09.444972  787845 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:49:09.444989  787845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:49:09.445075  787845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:49:09.445184  787845 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:49:09.445195  787845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:49:09.445224  787845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:49:09.445285  787845 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:49:09.445294  787845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:49:09.445318  787845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:49:09.445368  787845 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.no-preload-126380 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-126380]
	I1115 11:49:09.872630  787845 provision.go:177] copyRemoteCerts
	I1115 11:49:09.872700  787845 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:49:09.872753  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:09.890843  787845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:49:10.007674  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:49:10.031009  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 11:49:10.050550  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 11:49:10.069317  787845 provision.go:87] duration metric: took 643.993558ms to configureAuth
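
The configureAuth step above generates a server certificate signed by the local minikube CA with the SANs listed in the log (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-126380) and then scp's it to /etc/docker on the node. Below is a minimal sketch of that certificate generation, assuming a PEM-encoded, PKCS#1 RSA CA key pair on disk; it is not minikube's provision.go, and the file paths are placeholders.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func must(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Load the CA created at profile setup (paths are placeholders).
    	caPEM, err := os.ReadFile("ca.pem")
    	must(err)
    	caKeyPEM, err := os.ReadFile("ca-key.pem")
    	must(err)
    	caBlock, _ := pem.Decode(caPEM)
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	must(err)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: PKCS#1 RSA key
    	must(err)

    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	must(err)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-126380"}}, // org= from the log
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "no-preload-126380"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	must(err)
    	must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0600))
    	must(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600))
    }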
	I1115 11:49:10.069351  787845 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:49:10.069542  787845 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:49:10.069656  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:10.088205  787845 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:10.088538  787845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 11:49:10.088572  787845 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:49:10.435553  787845 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:49:10.435575  787845 machine.go:97] duration metric: took 1.615164588s to provisionDockerMachine
	I1115 11:49:10.435585  787845 client.go:176] duration metric: took 4.030626607s to LocalClient.Create
	I1115 11:49:10.435604  787845 start.go:167] duration metric: took 4.030695465s to libmachine.API.Create "no-preload-126380"
	I1115 11:49:10.435612  787845 start.go:293] postStartSetup for "no-preload-126380" (driver="docker")
	I1115 11:49:10.435622  787845 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:49:10.435700  787845 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:49:10.435743  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:10.456713  787845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:49:10.565104  787845 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:49:10.568420  787845 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:49:10.568449  787845 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:49:10.568460  787845 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:49:10.568515  787845 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:49:10.568606  787845 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:49:10.568716  787845 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:49:10.576120  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:49:10.594171  787845 start.go:296] duration metric: took 158.543458ms for postStartSetup
	I1115 11:49:10.594583  787845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-126380
	I1115 11:49:10.613997  787845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/config.json ...
	I1115 11:49:10.614283  787845 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:49:10.614337  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:10.630885  787845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:49:10.734151  787845 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:49:10.744102  787845 start.go:128] duration metric: took 4.344972701s to createHost
	I1115 11:49:10.744130  787845 start.go:83] releasing machines lock for "no-preload-126380", held for 4.345100982s
	I1115 11:49:10.744204  787845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-126380
	I1115 11:49:10.763549  787845 ssh_runner.go:195] Run: cat /version.json
	I1115 11:49:10.763604  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:10.763848  787845 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:49:10.763916  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:10.783083  787845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:49:10.793000  787845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:49:10.888893  787845 ssh_runner.go:195] Run: systemctl --version
	I1115 11:49:10.981372  787845 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:49:11.027234  787845 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:49:11.031620  787845 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:49:11.031744  787845 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:49:11.062333  787845 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 11:49:11.062356  787845 start.go:496] detecting cgroup driver to use...
	I1115 11:49:11.062391  787845 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:49:11.062446  787845 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:49:11.081123  787845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:49:11.095421  787845 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:49:11.095545  787845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:49:11.117678  787845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:49:11.137527  787845 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1115 11:49:07.241810  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	W1115 11:49:09.741868  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	I1115 11:49:11.264935  787845 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:49:11.388357  787845 docker.go:234] disabling docker service ...
	I1115 11:49:11.388542  787845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:49:11.414441  787845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:49:11.429701  787845 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:49:11.548903  787845 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:49:11.685776  787845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:49:11.700271  787845 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:49:11.715290  787845 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:49:11.715358  787845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:49:11.725424  787845 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:49:11.725534  787845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:49:11.736389  787845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:49:11.750075  787845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:49:11.760292  787845 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:49:11.769000  787845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:49:11.778072  787845 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:49:11.792073  787845 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:49:11.801814  787845 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:49:11.809649  787845 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:49:11.817594  787845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:49:11.930348  787845 ssh_runner.go:195] Run: sudo systemctl restart crio
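
For readability, the CRI-O configuration commands above (crictl endpoint, pause image, cgroup driver, conmon cgroup, unprivileged low ports, ip_forward, then a crio restart) are collected in the sketch below. The shell fragments are the ones shown in the log; running them locally via sudo bash -c is a simplification, since in the test they are executed on the node through ssh_runner, and the program is illustrative rather than minikube's crio.go.

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	steps := []string{
    		// point crictl at the CRI-O socket
    		`mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' > /etc/crictl.yaml`,
    		// pause image and cgroup driver
    		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' ` + conf,
    		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
    		// conmon goes into the pod cgroup
    		`sed -i '/conmon_cgroup = .*/d' ` + conf,
    		`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
    		// let pods bind low ports without privileges
    		`sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' ` + conf,
    		`grep -q "^ *default_sysctls" ` + conf + ` || sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' ` + conf,
    		`sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' ` + conf,
    		// kernel forwarding, then pick up the new config
    		`echo 1 > /proc/sys/net/ipv4/ip_forward`,
    		`systemctl daemon-reload && systemctl restart crio`,
    	}
    	for _, s := range steps {
    		if out, err := exec.Command("sudo", "bash", "-c", s).CombinedOutput(); err != nil {
    			log.Fatalf("step failed: %s\n%v\n%s", s, err, out)
    		}
    	}
    }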
	I1115 11:49:12.055294  787845 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:49:12.055410  787845 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:49:12.059332  787845 start.go:564] Will wait 60s for crictl version
	I1115 11:49:12.059393  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.063097  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:49:12.091898  787845 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:49:12.092049  787845 ssh_runner.go:195] Run: crio --version
	I1115 11:49:12.122529  787845 ssh_runner.go:195] Run: crio --version
	I1115 11:49:12.156121  787845 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:49:12.158988  787845 cli_runner.go:164] Run: docker network inspect no-preload-126380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:49:12.174824  787845 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 11:49:12.178523  787845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:49:12.188129  787845 kubeadm.go:884] updating cluster {Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:49:12.188243  787845 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:49:12.188293  787845 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:49:12.213236  787845 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1115 11:49:12.213262  787845 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1115 11:49:12.213308  787845 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:12.213336  787845 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:12.213506  787845 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:12.213515  787845 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1115 11:49:12.213606  787845 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:12.213610  787845 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:12.213701  787845 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:12.213707  787845 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:12.215562  787845 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:12.215837  787845 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1115 11:49:12.216054  787845 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:12.216247  787845 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:12.216286  787845 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:12.216468  787845 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:12.216599  787845 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:12.216727  787845 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:12.466521  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:12.466632  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:12.472640  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:12.474837  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:12.483462  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1115 11:49:12.485706  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:12.517230  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:12.654157  787845 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1115 11:49:12.654254  787845 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:12.654349  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.654522  787845 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1115 11:49:12.654603  787845 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:12.654666  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.656565  787845 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1115 11:49:12.656709  787845 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:12.656796  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.678249  787845 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1115 11:49:12.678497  787845 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:12.678370  787845 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1115 11:49:12.678558  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.678591  787845 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1115 11:49:12.678649  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.678468  787845 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1115 11:49:12.678713  787845 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:12.678748  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.689020  787845 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1115 11:49:12.689063  787845 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:12.689120  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.689197  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:12.689230  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:12.689293  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:12.689325  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:12.689363  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 11:49:12.689199  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:12.735583  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:12.737093  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:12.811531  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:12.811706  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 11:49:12.811803  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:12.811888  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:12.837777  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:12.847590  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:12.847711  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:12.910223  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:12.910328  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 11:49:12.910429  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:12.910460  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:12.957830  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:12.975297  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:12.975372  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1115 11:49:12.975450  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 11:49:13.027686  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1115 11:49:13.027856  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1115 11:49:13.028062  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1115 11:49:13.028171  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1115 11:49:13.027935  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1115 11:49:13.028321  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1115 11:49:13.027990  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1115 11:49:13.028467  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1115 11:49:13.051341  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1115 11:49:13.051451  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1115 11:49:13.051452  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1115 11:49:13.051510  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1115 11:49:13.051548  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1115 11:49:13.051611  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1115 11:49:13.051630  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1115 11:49:13.051682  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1115 11:49:13.051700  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1115 11:49:13.051740  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 11:49:13.051760  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1115 11:49:13.051589  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1115 11:49:13.051910  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1115 11:49:13.051912  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1115 11:49:13.076630  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1115 11:49:13.076724  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1115 11:49:13.076844  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1115 11:49:13.076940  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1115 11:49:13.135004  787845 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1115 11:49:13.135163  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1115 11:49:13.561620  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1115 11:49:13.561698  787845 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1115 11:49:13.561777  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1115 11:49:13.680914  787845 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1115 11:49:13.681159  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:15.305713  787845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.743894953s)
	I1115 11:49:15.305776  787845 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.624571011s)
	I1115 11:49:15.305801  787845 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1115 11:49:15.305839  787845 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:15.305892  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:15.305955  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1115 11:49:15.305982  787845 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1115 11:49:15.306009  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	W1115 11:49:12.240567  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	W1115 11:49:14.741510  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	I1115 11:49:16.636929  787845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.330899294s)
	I1115 11:49:16.636953  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1115 11:49:16.636970  787845 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 11:49:16.637016  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 11:49:16.637076  787845 ssh_runner.go:235] Completed: which crictl: (1.331171608s)
	I1115 11:49:16.637109  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:16.709368  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:18.105639  787845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.468600984s)
	I1115 11:49:18.105667  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1115 11:49:18.105686  787845 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 11:49:18.105734  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 11:49:18.105805  787845 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.396413073s)
	I1115 11:49:18.105846  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:19.434756  787845 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.328881511s)
	I1115 11:49:19.434808  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1115 11:49:19.434903  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1115 11:49:19.434955  787845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.329200554s)
	I1115 11:49:19.434975  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1115 11:49:19.434997  787845 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1115 11:49:19.435040  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	W1115 11:49:16.742216  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	W1115 11:49:19.249257  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	I1115 11:49:21.241932  784287 pod_ready.go:94] pod "coredns-66bc5c9577-2l449" is "Ready"
	I1115 11:49:21.241965  784287 pod_ready.go:86] duration metric: took 34.007639528s for pod "coredns-66bc5c9577-2l449" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:21.244954  784287 pod_ready.go:83] waiting for pod "etcd-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:21.250765  784287 pod_ready.go:94] pod "etcd-embed-certs-404149" is "Ready"
	I1115 11:49:21.250801  784287 pod_ready.go:86] duration metric: took 5.810556ms for pod "etcd-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:21.254481  784287 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:21.261741  784287 pod_ready.go:94] pod "kube-apiserver-embed-certs-404149" is "Ready"
	I1115 11:49:21.261774  784287 pod_ready.go:86] duration metric: took 7.259236ms for pod "kube-apiserver-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:21.264695  784287 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:21.440031  784287 pod_ready.go:94] pod "kube-controller-manager-embed-certs-404149" is "Ready"
	I1115 11:49:21.440067  784287 pod_ready.go:86] duration metric: took 175.338405ms for pod "kube-controller-manager-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:21.639338  784287 pod_ready.go:83] waiting for pod "kube-proxy-5d2lb" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:22.039530  784287 pod_ready.go:94] pod "kube-proxy-5d2lb" is "Ready"
	I1115 11:49:22.039574  784287 pod_ready.go:86] duration metric: took 400.202486ms for pod "kube-proxy-5d2lb" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:22.238924  784287 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:22.639015  784287 pod_ready.go:94] pod "kube-scheduler-embed-certs-404149" is "Ready"
	I1115 11:49:22.639047  784287 pod_ready.go:86] duration metric: took 400.093404ms for pod "kube-scheduler-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:22.639060  784287 pod_ready.go:40] duration metric: took 35.408844515s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:49:22.727121  784287 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:49:22.731768  784287 out.go:179] * Done! kubectl is now configured to use "embed-certs-404149" cluster and "default" namespace by default
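
The interleaved 784287 lines above come from the parallel embed-certs-404149 start, which polls kube-system pods until each reports the Ready condition (pod_ready.go). A rough client-go sketch of that readiness check follows; the kubeconfig path is hypothetical, the pod name is taken from the log, and this is not minikube's implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod carries a PodReady condition set to True.
    func podReady(cs kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		ok, err := podReady(cs, "kube-system", "coredns-66bc5c9577-2l449") // pod name from the log
    		if err == nil && ok {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // the log shows checks roughly every 2.5s
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }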
	I1115 11:49:21.184103  787845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.749043738s)
	I1115 11:49:21.184133  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1115 11:49:21.184138  787845 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.749216605s)
	I1115 11:49:21.184152  787845 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1115 11:49:21.184161  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1115 11:49:21.184184  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1115 11:49:21.184201  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1115 11:49:25.115546  787845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.931323402s)
	I1115 11:49:25.115578  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1115 11:49:25.115596  787845 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1115 11:49:25.115646  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1115 11:49:25.743592  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1115 11:49:25.743625  787845 cache_images.go:125] Successfully loaded all cached images
	I1115 11:49:25.743631  787845 cache_images.go:94] duration metric: took 13.530354185s to LoadCachedImages
	I1115 11:49:25.743643  787845 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1115 11:49:25.743732  787845 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-126380 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:49:25.743819  787845 ssh_runner.go:195] Run: crio config
	I1115 11:49:25.809912  787845 cni.go:84] Creating CNI manager for ""
	I1115 11:49:25.809937  787845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:49:25.809953  787845 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:49:25.809977  787845 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-126380 NodeName:no-preload-126380 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:49:25.810324  787845 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-126380"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 11:49:25.810414  787845 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:49:25.823131  787845 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1115 11:49:25.823199  787845 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1115 11:49:25.831459  787845 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1115 11:49:25.831550  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1115 11:49:25.832500  787845 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1115 11:49:25.832503  787845 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1115 11:49:25.836233  787845 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1115 11:49:25.836268  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1115 11:49:26.669444  787845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:49:26.693463  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1115 11:49:26.701767  787845 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1115 11:49:26.701804  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1115 11:49:26.790692  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1115 11:49:26.808783  787845 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1115 11:49:26.808897  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1115 11:49:27.339578  787845 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:49:27.347559  787845 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 11:49:27.361750  787845 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:49:27.374655  787845 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1115 11:49:27.391923  787845 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:49:27.395426  787845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:49:27.405010  787845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:49:27.526905  787845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:49:27.544569  787845 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380 for IP: 192.168.85.2
	I1115 11:49:27.544588  787845 certs.go:195] generating shared ca certs ...
	I1115 11:49:27.544614  787845 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:27.544754  787845 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:49:27.544794  787845 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:49:27.544801  787845 certs.go:257] generating profile certs ...
	I1115 11:49:27.544958  787845 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.key
	I1115 11:49:27.544975  787845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt with IP's: []
	I1115 11:49:27.960655  787845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt ...
	I1115 11:49:27.960688  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt: {Name:mk40d5f9049445c76d7ff12fc64f93eb3900925d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:27.960898  787845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.key ...
	I1115 11:49:27.960911  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.key: {Name:mkf193e03cbd780b09ed1a5bc0b40e4fdb1d3987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:27.961014  787845 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key.d85d6acb
	I1115 11:49:27.961030  787845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.crt.d85d6acb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1115 11:49:28.319180  787845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.crt.d85d6acb ...
	I1115 11:49:28.319214  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.crt.d85d6acb: {Name:mkf9e268be0128d91467436a8d4d4b86b7104140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:28.319402  787845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key.d85d6acb ...
	I1115 11:49:28.319416  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key.d85d6acb: {Name:mkebef29ef024ee0a65394a2500f7f9420bbb238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:28.319495  787845 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.crt.d85d6acb -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.crt
	I1115 11:49:28.319574  787845 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key.d85d6acb -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key
	I1115 11:49:28.319634  787845 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.key
	I1115 11:49:28.319650  787845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.crt with IP's: []
	I1115 11:49:28.737729  787845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.crt ...
	I1115 11:49:28.737760  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.crt: {Name:mk2482c56b63a21a5d9bea5eecaefa4ad9a4649e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:28.737949  787845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.key ...
	I1115 11:49:28.737962  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.key: {Name:mka42158b2d97a744a1695a70b24050ff2a02587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:28.738155  787845 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:49:28.738199  787845 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:49:28.738215  787845 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:49:28.738245  787845 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:49:28.738273  787845 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:49:28.738301  787845 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:49:28.738346  787845 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:49:28.738921  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:49:28.757998  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:49:28.776793  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:49:28.794451  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:49:28.812436  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 11:49:28.831976  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:49:28.851065  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:49:28.869009  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:49:28.888022  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:49:28.906375  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:49:28.924079  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:49:28.942219  787845 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:49:28.955319  787845 ssh_runner.go:195] Run: openssl version
	I1115 11:49:28.961550  787845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:49:28.970021  787845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:49:28.974470  787845 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:49:28.974538  787845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:49:29.015757  787845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:49:29.024426  787845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:49:29.033019  787845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:49:29.036746  787845 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:49:29.036809  787845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:49:29.077567  787845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:49:29.085981  787845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:49:29.094929  787845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:49:29.098711  787845 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:49:29.098809  787845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:49:29.144442  787845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:49:29.152996  787845 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:49:29.156741  787845 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 11:49:29.156796  787845 kubeadm.go:401] StartCluster: {Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:49:29.157018  787845 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:49:29.157082  787845 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:49:29.198212  787845 cri.go:89] found id: ""
	I1115 11:49:29.198334  787845 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:49:29.207223  787845 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 11:49:29.216060  787845 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 11:49:29.216157  787845 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 11:49:29.227027  787845 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 11:49:29.227049  787845 kubeadm.go:158] found existing configuration files:
	
	I1115 11:49:29.227114  787845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 11:49:29.237799  787845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 11:49:29.237878  787845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 11:49:29.245289  787845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 11:49:29.255742  787845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 11:49:29.255840  787845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 11:49:29.263853  787845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 11:49:29.271633  787845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 11:49:29.271702  787845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 11:49:29.279788  787845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 11:49:29.287191  787845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 11:49:29.287295  787845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 11:49:29.294741  787845 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 11:49:29.337847  787845 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 11:49:29.337916  787845 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 11:49:29.359637  787845 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 11:49:29.359723  787845 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 11:49:29.359766  787845 kubeadm.go:319] OS: Linux
	I1115 11:49:29.359824  787845 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 11:49:29.359884  787845 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 11:49:29.359937  787845 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 11:49:29.359992  787845 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 11:49:29.360047  787845 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 11:49:29.360102  787845 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 11:49:29.360154  787845 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 11:49:29.360208  787845 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 11:49:29.360259  787845 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 11:49:29.446201  787845 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 11:49:29.446322  787845 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 11:49:29.446421  787845 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 11:49:29.468848  787845 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 11:49:29.475576  787845 out.go:252]   - Generating certificates and keys ...
	I1115 11:49:29.475772  787845 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 11:49:29.475907  787845 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 11:49:29.943730  787845 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 11:49:30.614103  787845 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 11:49:31.169774  787845 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 11:49:32.518859  787845 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 11:49:33.054537  787845 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 11:49:33.054921  787845 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-126380] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 11:49:33.232654  787845 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 11:49:33.233091  787845 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-126380] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 11:49:33.425384  787845 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 11:49:33.675606  787845 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 11:49:33.909070  787845 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 11:49:33.909406  787845 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 11:49:34.841037  787845 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 11:49:35.216684  787845 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 11:49:35.825940  787845 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 11:49:36.505331  787845 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 11:49:36.880137  787845 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 11:49:36.881966  787845 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 11:49:36.889040  787845 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.6393187Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=607b5834-8407-413a-8f7c-75835d05d699 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.6517989Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f043b3b6-329f-463c-9176-226c69669912 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.651911639Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.665012629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.665276484Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0662ec9a703106e23bd2dc9d61e5a2f020a180bb8d541a5b6f7638311c4dfb07/merged/etc/passwd: no such file or directory"
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.665301453Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0662ec9a703106e23bd2dc9d61e5a2f020a180bb8d541a5b6f7638311c4dfb07/merged/etc/group: no such file or directory"
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.665563216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.69165216Z" level=info msg="Created container d7c6dc21b5fd9f658868453f2c488629e112e772ea09f0596e980ba333d294cb: kube-system/storage-provisioner/storage-provisioner" id=f043b3b6-329f-463c-9176-226c69669912 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.693036167Z" level=info msg="Starting container: d7c6dc21b5fd9f658868453f2c488629e112e772ea09f0596e980ba333d294cb" id=c2ad4da9-23e6-4b4e-9c79-abb59286dc69 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.702453105Z" level=info msg="Started container" PID=1631 containerID=d7c6dc21b5fd9f658868453f2c488629e112e772ea09f0596e980ba333d294cb description=kube-system/storage-provisioner/storage-provisioner id=c2ad4da9-23e6-4b4e-9c79-abb59286dc69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8eba469d9fd510992b983ec5fb91c079631d6193893614f4412aec587b9e9806
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.40124298Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.416578627Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.416626742Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.416706891Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.423909068Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.423941216Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.423975161Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.429743912Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.429781525Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.429804631Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.432888307Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.432927832Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.43295065Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.444050437Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.444392488Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d7c6dc21b5fd9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   8eba469d9fd51       storage-provisioner                          kube-system
	c0aea60eb23fb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   ba5775d840f6e       dashboard-metrics-scraper-6ffb444bf9-bhxn6   kubernetes-dashboard
	cc2532c9ae831       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago      Running             kubernetes-dashboard        0                   121c83ba90ff2       kubernetes-dashboard-855c9754f9-97q22        kubernetes-dashboard
	09258c4ef6062       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   a13a156f8b4da       coredns-66bc5c9577-2l449                     kube-system
	1ecc28e1024b3       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   98c4e884c8255       busybox                                      default
	496e2fb54178e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   2ca46794c1477       kindnet-qsvh7                                kube-system
	ee0d081abddf1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   8eba469d9fd51       storage-provisioner                          kube-system
	80745c56ff5e6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   b45b58a5a10bd       kube-proxy-5d2lb                             kube-system
	8fe33f405cefe       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   06ccbd6f71e02       etcd-embed-certs-404149                      kube-system
	b3bad56f102ba       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   d748df7b7a92f       kube-apiserver-embed-certs-404149            kube-system
	9412dd63cbe6e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   e82544de6015b       kube-scheduler-embed-certs-404149            kube-system
	f782a05f34be5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   8f3fcc75f6601       kube-controller-manager-embed-certs-404149   kube-system
	
	
	==> coredns [09258c4ef606211d4569a1f07e1868b18902b874617b2f6556a7c2f17f7edb9d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59520 - 693 "HINFO IN 7809165300182040317.2342679607944230001. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025870828s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-404149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-404149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=embed-certs-404149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_47_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:47:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-404149
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:49:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:49:15 +0000   Sat, 15 Nov 2025 11:47:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:49:15 +0000   Sat, 15 Nov 2025 11:47:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:49:15 +0000   Sat, 15 Nov 2025 11:47:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:49:15 +0000   Sat, 15 Nov 2025 11:48:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-404149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                e5de80db-1b6a-4760-801b-d0fd814d39f6
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-2l449                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m16s
	  kube-system                 etcd-embed-certs-404149                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m21s
	  kube-system                 kindnet-qsvh7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-embed-certs-404149             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-embed-certs-404149    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-5d2lb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-embed-certs-404149             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bhxn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-97q22         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m14s              kube-proxy       
	  Normal   Starting                 51s                kube-proxy       
	  Normal   Starting                 2m22s              kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m22s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     2m21s              kubelet          Node embed-certs-404149 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m21s              kubelet          Node embed-certs-404149 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m21s              kubelet          Node embed-certs-404149 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m17s              node-controller  Node embed-certs-404149 event: Registered Node embed-certs-404149 in Controller
	  Normal   NodeReady                95s                kubelet          Node embed-certs-404149 status is now: NodeReady
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node embed-certs-404149 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node embed-certs-404149 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node embed-certs-404149 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                node-controller  Node embed-certs-404149 event: Registered Node embed-certs-404149 in Controller
	
	
	==> dmesg <==
	[Nov15 11:26] overlayfs: idmapped layers are currently not supported
	[Nov15 11:28] overlayfs: idmapped layers are currently not supported
	[ +23.116625] overlayfs: idmapped layers are currently not supported
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	[Nov15 11:44] overlayfs: idmapped layers are currently not supported
	[Nov15 11:46] overlayfs: idmapped layers are currently not supported
	[Nov15 11:47] overlayfs: idmapped layers are currently not supported
	[ +42.475391] overlayfs: idmapped layers are currently not supported
	[Nov15 11:48] overlayfs: idmapped layers are currently not supported
	[Nov15 11:49] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8fe33f405cefe31c9ab389c51d0c2b2ca0f66c055679053ef5665058df3e4a50] <==
	{"level":"warn","ts":"2025-11-15T11:48:43.030453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.075570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.092337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.137618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.158670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.189251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.204536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.247358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.287876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.314975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.352637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.378408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.406647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.428152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.463701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.497807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.533080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.559303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.575934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.592650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.617661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.650779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.667132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.692896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.751428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52088","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:49:38 up  3:32,  0 user,  load average: 3.46, 3.27, 2.86
	Linux embed-certs-404149 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [496e2fb54178ec02d3986f84953b12a001e15ee7cc882c83e58e00fbd053f25b] <==
	I1115 11:48:46.234344       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:48:46.234572       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 11:48:46.234708       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:48:46.234720       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:48:46.234735       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:48:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:48:46.407825       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:48:46.407852       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:48:46.407861       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:48:46.409036       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 11:49:16.398411       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 11:49:16.407986       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 11:49:16.408174       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 11:49:16.410256       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1115 11:49:17.907956       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:49:17.908045       1 metrics.go:72] Registering metrics
	I1115 11:49:17.908130       1 controller.go:711] "Syncing nftables rules"
	I1115 11:49:26.400951       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 11:49:26.400987       1 main.go:301] handling current node
	I1115 11:49:36.404965       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 11:49:36.405073       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b3bad56f102bafd52e8e47890a2907bc310240d0d6905fdf10422d09d338938d] <==
	I1115 11:48:45.004928       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 11:48:45.005013       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 11:48:45.005067       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 11:48:45.005203       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 11:48:45.005258       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 11:48:45.029235       1 aggregator.go:171] initial CRD sync complete...
	I1115 11:48:45.029276       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 11:48:45.029285       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 11:48:45.029294       1 cache.go:39] Caches are synced for autoregister controller
	I1115 11:48:45.037008       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 11:48:45.037110       1 policy_source.go:240] refreshing policies
	I1115 11:48:45.038089       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 11:48:45.072557       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1115 11:48:45.084392       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 11:48:45.446088       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 11:48:45.523905       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:48:46.117965       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 11:48:46.290360       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 11:48:46.387209       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:48:46.424582       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:48:46.591047       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.96.167"}
	I1115 11:48:46.644334       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.169.166"}
	I1115 11:48:48.477711       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 11:48:48.824469       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 11:48:49.088713       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f782a05f34be564eb380a59a1f625d50f0d686d350bc75f48d4e7b5587a399bb] <==
	I1115 11:48:48.473136       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 11:48:48.474587       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 11:48:48.475738       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 11:48:48.475902       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 11:48:48.475987       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 11:48:48.476026       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 11:48:48.476056       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 11:48:48.475836       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:48:48.480041       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 11:48:48.480208       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 11:48:48.482504       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 11:48:48.487014       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 11:48:48.490113       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 11:48:48.492995       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 11:48:48.497174       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 11:48:48.497410       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 11:48:48.500370       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 11:48:48.500823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 11:48:48.510108       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 11:48:48.513761       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 11:48:48.513785       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:48:48.517483       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:48:48.517507       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:48:48.517513       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:48:48.519735       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-proxy [80745c56ff5e6bc966333b250babd40241909a49ddafe4142822f4aa0c5dfe6e] <==
	I1115 11:48:46.714310       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:48:46.842498       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:48:46.952931       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:48:46.953044       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 11:48:46.956953       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:48:46.994202       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:48:46.994313       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:48:46.999771       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:48:47.000200       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:48:47.000451       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:48:47.002072       1 config.go:200] "Starting service config controller"
	I1115 11:48:47.002169       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:48:47.002212       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:48:47.002242       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:48:47.002288       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:48:47.002316       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:48:47.003103       1 config.go:309] "Starting node config controller"
	I1115 11:48:47.006010       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:48:47.006108       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:48:47.107534       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 11:48:47.112893       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 11:48:47.103309       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9412dd63cbe6ee0643666a35f225412ac451380045d2849d5220158a0db17940] <==
	I1115 11:48:43.733028       1 serving.go:386] Generated self-signed cert in-memory
	I1115 11:48:47.401254       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 11:48:47.401418       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:48:47.406730       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 11:48:47.407092       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:48:47.418858       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:48:47.407107       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:48:47.435380       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:48:47.407123       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 11:48:47.407067       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 11:48:47.439425       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 11:48:47.519784       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:48:47.539558       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 11:48:47.539687       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 15 11:48:49 embed-certs-404149 kubelet[778]: I1115 11:48:49.057709     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7e00fda2-305a-44d4-aab6-5f9f7f148936-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-bhxn6\" (UID: \"7e00fda2-305a-44d4-aab6-5f9f7f148936\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhxn6"
	Nov 15 11:48:49 embed-certs-404149 kubelet[778]: W1115 11:48:49.263484     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/crio-ba5775d840f6e92b87b5cfa8663c6fae7b4012bdb19ae4e43de587b7e2003bb6 WatchSource:0}: Error finding container ba5775d840f6e92b87b5cfa8663c6fae7b4012bdb19ae4e43de587b7e2003bb6: Status 404 returned error can't find the container with id ba5775d840f6e92b87b5cfa8663c6fae7b4012bdb19ae4e43de587b7e2003bb6
	Nov 15 11:48:49 embed-certs-404149 kubelet[778]: W1115 11:48:49.287957     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/crio-121c83ba90ff2aa238ddb2a4292316ccf7eca02b4c26057e82bb51c46cbcac30 WatchSource:0}: Error finding container 121c83ba90ff2aa238ddb2a4292316ccf7eca02b4c26057e82bb51c46cbcac30: Status 404 returned error can't find the container with id 121c83ba90ff2aa238ddb2a4292316ccf7eca02b4c26057e82bb51c46cbcac30
	Nov 15 11:48:51 embed-certs-404149 kubelet[778]: I1115 11:48:51.103939     778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 15 11:48:54 embed-certs-404149 kubelet[778]: I1115 11:48:54.549390     778 scope.go:117] "RemoveContainer" containerID="7416decca5739ea9282cd272800512c1d9483ca62b7c360aef301f2596509ed3"
	Nov 15 11:48:55 embed-certs-404149 kubelet[778]: I1115 11:48:55.557240     778 scope.go:117] "RemoveContainer" containerID="7416decca5739ea9282cd272800512c1d9483ca62b7c360aef301f2596509ed3"
	Nov 15 11:48:55 embed-certs-404149 kubelet[778]: I1115 11:48:55.557388     778 scope.go:117] "RemoveContainer" containerID="d589d7243aa3e7a252e8e9c761451a2fc810f89b372a4dbbfc5ea46ff42010d2"
	Nov 15 11:48:55 embed-certs-404149 kubelet[778]: E1115 11:48:55.557613     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhxn6_kubernetes-dashboard(7e00fda2-305a-44d4-aab6-5f9f7f148936)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhxn6" podUID="7e00fda2-305a-44d4-aab6-5f9f7f148936"
	Nov 15 11:48:56 embed-certs-404149 kubelet[778]: I1115 11:48:56.561183     778 scope.go:117] "RemoveContainer" containerID="d589d7243aa3e7a252e8e9c761451a2fc810f89b372a4dbbfc5ea46ff42010d2"
	Nov 15 11:48:56 embed-certs-404149 kubelet[778]: E1115 11:48:56.561347     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhxn6_kubernetes-dashboard(7e00fda2-305a-44d4-aab6-5f9f7f148936)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhxn6" podUID="7e00fda2-305a-44d4-aab6-5f9f7f148936"
	Nov 15 11:48:59 embed-certs-404149 kubelet[778]: I1115 11:48:59.243268     778 scope.go:117] "RemoveContainer" containerID="d589d7243aa3e7a252e8e9c761451a2fc810f89b372a4dbbfc5ea46ff42010d2"
	Nov 15 11:48:59 embed-certs-404149 kubelet[778]: E1115 11:48:59.244043     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhxn6_kubernetes-dashboard(7e00fda2-305a-44d4-aab6-5f9f7f148936)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhxn6" podUID="7e00fda2-305a-44d4-aab6-5f9f7f148936"
	Nov 15 11:49:12 embed-certs-404149 kubelet[778]: I1115 11:49:12.446266     778 scope.go:117] "RemoveContainer" containerID="d589d7243aa3e7a252e8e9c761451a2fc810f89b372a4dbbfc5ea46ff42010d2"
	Nov 15 11:49:12 embed-certs-404149 kubelet[778]: I1115 11:49:12.615415     778 scope.go:117] "RemoveContainer" containerID="d589d7243aa3e7a252e8e9c761451a2fc810f89b372a4dbbfc5ea46ff42010d2"
	Nov 15 11:49:12 embed-certs-404149 kubelet[778]: I1115 11:49:12.616082     778 scope.go:117] "RemoveContainer" containerID="c0aea60eb23fba11411e06b64480c0864177c8fd4eb87503bd582fb506b554c1"
	Nov 15 11:49:12 embed-certs-404149 kubelet[778]: E1115 11:49:12.616422     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhxn6_kubernetes-dashboard(7e00fda2-305a-44d4-aab6-5f9f7f148936)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhxn6" podUID="7e00fda2-305a-44d4-aab6-5f9f7f148936"
	Nov 15 11:49:12 embed-certs-404149 kubelet[778]: I1115 11:49:12.654213     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-97q22" podStartSLOduration=13.520549815 podStartE2EDuration="24.652164005s" podCreationTimestamp="2025-11-15 11:48:48 +0000 UTC" firstStartedPulling="2025-11-15 11:48:49.292161577 +0000 UTC m=+11.096110593" lastFinishedPulling="2025-11-15 11:49:00.423775767 +0000 UTC m=+22.227724783" observedRunningTime="2025-11-15 11:49:00.610167089 +0000 UTC m=+22.414116113" watchObservedRunningTime="2025-11-15 11:49:12.652164005 +0000 UTC m=+34.456113029"
	Nov 15 11:49:16 embed-certs-404149 kubelet[778]: I1115 11:49:16.634568     778 scope.go:117] "RemoveContainer" containerID="ee0d081abddf11575d32cec2a52f4a3d14483c7066159cc5a3b99aa279f76238"
	Nov 15 11:49:19 embed-certs-404149 kubelet[778]: I1115 11:49:19.238148     778 scope.go:117] "RemoveContainer" containerID="c0aea60eb23fba11411e06b64480c0864177c8fd4eb87503bd582fb506b554c1"
	Nov 15 11:49:19 embed-certs-404149 kubelet[778]: E1115 11:49:19.238927     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhxn6_kubernetes-dashboard(7e00fda2-305a-44d4-aab6-5f9f7f148936)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhxn6" podUID="7e00fda2-305a-44d4-aab6-5f9f7f148936"
	Nov 15 11:49:30 embed-certs-404149 kubelet[778]: I1115 11:49:30.446876     778 scope.go:117] "RemoveContainer" containerID="c0aea60eb23fba11411e06b64480c0864177c8fd4eb87503bd582fb506b554c1"
	Nov 15 11:49:30 embed-certs-404149 kubelet[778]: E1115 11:49:30.447508     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhxn6_kubernetes-dashboard(7e00fda2-305a-44d4-aab6-5f9f7f148936)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhxn6" podUID="7e00fda2-305a-44d4-aab6-5f9f7f148936"
	Nov 15 11:49:35 embed-certs-404149 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 11:49:35 embed-certs-404149 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 11:49:35 embed-certs-404149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [cc2532c9ae8316b0f9a928a64b853c1143cd4bf2cc7096607b847819a61c8908] <==
	2025/11/15 11:49:00 Using namespace: kubernetes-dashboard
	2025/11/15 11:49:00 Using in-cluster config to connect to apiserver
	2025/11/15 11:49:00 Using secret token for csrf signing
	2025/11/15 11:49:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 11:49:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 11:49:00 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 11:49:00 Generating JWE encryption key
	2025/11/15 11:49:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 11:49:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 11:49:01 Initializing JWE encryption key from synchronized object
	2025/11/15 11:49:01 Creating in-cluster Sidecar client
	2025/11/15 11:49:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 11:49:01 Serving insecurely on HTTP port: 9090
	2025/11/15 11:49:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 11:49:00 Starting overwatch
	
	
	==> storage-provisioner [d7c6dc21b5fd9f658868453f2c488629e112e772ea09f0596e980ba333d294cb] <==
	I1115 11:49:16.716028       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 11:49:16.742360       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 11:49:16.742642       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 11:49:16.746087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:20.201764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:24.464051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:28.063202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:31.118075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:34.141429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:34.148781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:49:34.149070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 11:49:34.149297       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-404149_71d63121-c2cc-44b3-b175-3c5389d7ef66!
	I1115 11:49:34.152851       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ea4f04f-64df-44af-afb1-3382b56ac68d", APIVersion:"v1", ResourceVersion:"651", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-404149_71d63121-c2cc-44b3-b175-3c5389d7ef66 became leader
	W1115 11:49:34.162972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:34.169041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:49:34.252664       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-404149_71d63121-c2cc-44b3-b175-3c5389d7ef66!
	W1115 11:49:36.178866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:36.186905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:38.190760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:38.198405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ee0d081abddf11575d32cec2a52f4a3d14483c7066159cc5a3b99aa279f76238] <==
	I1115 11:48:46.436564       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 11:49:16.611577       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-404149 -n embed-certs-404149
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-404149 -n embed-certs-404149: exit status 2 (510.119857ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-404149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-404149
helpers_test.go:243: (dbg) docker inspect embed-certs-404149:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408",
	        "Created": "2025-11-15T11:46:51.97222958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 784416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:48:31.636067246Z",
	            "FinishedAt": "2025-11-15T11:48:30.783477133Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/hostname",
	        "HostsPath": "/var/lib/docker/containers/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/hosts",
	        "LogPath": "/var/lib/docker/containers/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408-json.log",
	        "Name": "/embed-certs-404149",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-404149:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-404149",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408",
	                "LowerDir": "/var/lib/docker/overlay2/499cc6850e7e43e93965ff14ffb04ef4e117996f45283ec5f42c89d1ea43216c-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/499cc6850e7e43e93965ff14ffb04ef4e117996f45283ec5f42c89d1ea43216c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/499cc6850e7e43e93965ff14ffb04ef4e117996f45283ec5f42c89d1ea43216c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/499cc6850e7e43e93965ff14ffb04ef4e117996f45283ec5f42c89d1ea43216c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-404149",
	                "Source": "/var/lib/docker/volumes/embed-certs-404149/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-404149",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-404149",
	                "name.minikube.sigs.k8s.io": "embed-certs-404149",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36e6f18627acf3d0af0ec3283356927ad4e178f512b995a769473ae566dcbcb1",
	            "SandboxKey": "/var/run/docker/netns/36e6f18627ac",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-404149": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:84:70:90:59:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7bb35a9e63004fb5710c19eaa0fed0c73a27efd3fdd5fdafde151cb4543696cc",
	                    "EndpointID": "1f5fdfe5bebbc07506b83bf92e0662a88fc4344cfe8f72d8f9d209dbea13e156",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-404149",
	                        "69e998144c08"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-404149 -n embed-certs-404149
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-404149 -n embed-certs-404149: exit status 2 (345.86712ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-404149 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-404149 logs -n 25: (2.028206624s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:44 UTC │ 15 Nov 25 11:45 UTC │
	│ image   │ old-k8s-version-872969 image list --format=json                                                                                                                                                                                               │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ pause   │ -p old-k8s-version-872969 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │                     │
	│ delete  │ -p old-k8s-version-872969                                                                                                                                                                                                                     │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ delete  │ -p old-k8s-version-872969                                                                                                                                                                                                                     │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ start   │ -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:47 UTC │
	│ start   │ -p cert-expiration-636406 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ delete  │ -p cert-expiration-636406                                                                                                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-769461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-769461 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:47 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-769461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:47 UTC │
	│ start   │ -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable metrics-server -p embed-certs-404149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ stop    │ -p embed-certs-404149 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable dashboard -p embed-certs-404149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:49 UTC │
	│ image   │ default-k8s-diff-port-769461 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ pause   │ -p default-k8s-diff-port-769461 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p disable-driver-mounts-200933                                                                                                                                                                                                               │ disable-driver-mounts-200933 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ start   │ -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │                     │
	│ image   │ embed-certs-404149 image list --format=json                                                                                                                                                                                                   │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ pause   │ -p embed-certs-404149 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:49:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:49:06.158503  787845 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:49:06.158629  787845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:49:06.158638  787845 out.go:374] Setting ErrFile to fd 2...
	I1115 11:49:06.158643  787845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:49:06.158902  787845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:49:06.159296  787845 out.go:368] Setting JSON to false
	I1115 11:49:06.160251  787845 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12697,"bootTime":1763194649,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:49:06.160318  787845 start.go:143] virtualization:  
	I1115 11:49:06.164115  787845 out.go:179] * [no-preload-126380] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:49:06.168127  787845 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:49:06.168285  787845 notify.go:221] Checking for updates...
	I1115 11:49:06.174308  787845 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:49:06.177412  787845 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:49:06.180372  787845 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:49:06.183410  787845 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:49:06.186354  787845 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:49:06.189813  787845 config.go:182] Loaded profile config "embed-certs-404149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:49:06.190026  787845 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:49:06.221201  787845 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:49:06.221326  787845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:49:06.281235  787845 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:49:06.27166405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:49:06.281350  787845 docker.go:319] overlay module found
	I1115 11:49:06.284585  787845 out.go:179] * Using the docker driver based on user configuration
	W1115 11:49:02.740312  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	W1115 11:49:04.741077  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	I1115 11:49:06.287547  787845 start.go:309] selected driver: docker
	I1115 11:49:06.287568  787845 start.go:930] validating driver "docker" against <nil>
	I1115 11:49:06.287582  787845 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:49:06.288338  787845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:49:06.348646  787845 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:49:06.339411192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:49:06.348814  787845 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 11:49:06.349068  787845 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:49:06.351906  787845 out.go:179] * Using Docker driver with root privileges
	I1115 11:49:06.354777  787845 cni.go:84] Creating CNI manager for ""
	I1115 11:49:06.354936  787845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:49:06.354950  787845 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 11:49:06.355033  787845 start.go:353] cluster config:
	{Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:49:06.358064  787845 out.go:179] * Starting "no-preload-126380" primary control-plane node in "no-preload-126380" cluster
	I1115 11:49:06.360929  787845 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:49:06.363946  787845 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:49:06.366920  787845 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:49:06.366991  787845 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:49:06.367051  787845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/config.json ...
	I1115 11:49:06.367083  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/config.json: {Name:mk9b4ca08b66711cad2f7c3ab350d005b0392d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:06.367336  787845 cache.go:107] acquiring lock: {Name:mk91726f44286832b0046d8499f5d58ff7ad2b6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.367391  787845 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1115 11:49:06.367399  787845 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.615µs
	I1115 11:49:06.367407  787845 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1115 11:49:06.367424  787845 cache.go:107] acquiring lock: {Name:mk100238a706e702239a000cdfd80c281f376431 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.367489  787845 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:06.367874  787845 cache.go:107] acquiring lock: {Name:mk15eeacf94b66be4392721a733df868bc784101 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.367974  787845 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:06.368249  787845 cache.go:107] acquiring lock: {Name:mkb04d459fbb71ba8df962665fc7ab481f00418b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.368343  787845 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:06.368644  787845 cache.go:107] acquiring lock: {Name:mkb69d6ceae6b9540e167400909c918adeec9369 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.368746  787845 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:06.369041  787845 cache.go:107] acquiring lock: {Name:mk10696b84637583e56394b885fa921b6d221577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.369140  787845 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1115 11:49:06.369427  787845 cache.go:107] acquiring lock: {Name:mk87d816e36c32f87fd55930f6a9d59e6dfc4a0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.369553  787845 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:06.369802  787845 cache.go:107] acquiring lock: {Name:mkd034e18ce491e5f4eb3166d5f81cee9da0de03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.369953  787845 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:06.372398  787845 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:06.372894  787845 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1115 11:49:06.373143  787845 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:06.373465  787845 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:06.373610  787845 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:06.373877  787845 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:06.374082  787845 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:06.398830  787845 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:49:06.398855  787845 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:49:06.398874  787845 cache.go:243] Successfully downloaded all kic artifacts
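(Note: image.go:81/100 above show minikube first asking the local Docker daemon whether the digest-pinned kic base image is already present before deciding to pull it. The following is only a rough stand-in for that probe, written as a small Go sketch that shells out to `docker image inspect`; the image reference is copied from the log, everything else is illustration.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1"
        // `docker image inspect` exits non-zero when the daemon does not have the image.
        if err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Run(); err != nil {
            fmt.Println("not in local daemon, a pull would be required")
            return
        }
        fmt.Println("found in local daemon, pull can be skipped") // the image.go:100 case above
    }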
	I1115 11:49:06.398899  787845 start.go:360] acquireMachinesLock for no-preload-126380: {Name:mk5469ab80c2d37eee16becc95c7569af1cc4687 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:06.399017  787845 start.go:364] duration metric: took 96.887µs to acquireMachinesLock for "no-preload-126380"
	I1115 11:49:06.399046  787845 start.go:93] Provisioning new machine with config: &{Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:49:06.399114  787845 start.go:125] createHost starting for "" (driver="docker")
	I1115 11:49:06.404637  787845 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 11:49:06.404911  787845 start.go:159] libmachine.API.Create for "no-preload-126380" (driver="docker")
	I1115 11:49:06.404949  787845 client.go:173] LocalClient.Create starting
	I1115 11:49:06.405034  787845 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 11:49:06.405071  787845 main.go:143] libmachine: Decoding PEM data...
	I1115 11:49:06.405087  787845 main.go:143] libmachine: Parsing certificate...
	I1115 11:49:06.405143  787845 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 11:49:06.405191  787845 main.go:143] libmachine: Decoding PEM data...
	I1115 11:49:06.405208  787845 main.go:143] libmachine: Parsing certificate...
	I1115 11:49:06.405688  787845 cli_runner.go:164] Run: docker network inspect no-preload-126380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 11:49:06.430371  787845 cli_runner.go:211] docker network inspect no-preload-126380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 11:49:06.430460  787845 network_create.go:284] running [docker network inspect no-preload-126380] to gather additional debugging logs...
	I1115 11:49:06.430483  787845 cli_runner.go:164] Run: docker network inspect no-preload-126380
	W1115 11:49:06.447951  787845 cli_runner.go:211] docker network inspect no-preload-126380 returned with exit code 1
	I1115 11:49:06.447981  787845 network_create.go:287] error running [docker network inspect no-preload-126380]: docker network inspect no-preload-126380: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-126380 not found
	I1115 11:49:06.448009  787845 network_create.go:289] output of [docker network inspect no-preload-126380]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-126380 not found
	
	** /stderr **
	I1115 11:49:06.448099  787845 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:49:06.464200  787845 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-70b4341e5839 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:cf:e4:18:31:11} reservation:<nil>}
	I1115 11:49:06.464545  787845 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5353e0ad5224 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:f4:9a:df:ce:52} reservation:<nil>}
	I1115 11:49:06.465024  787845 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-cf2ab118f937 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:c9:22:19:21:27} reservation:<nil>}
	I1115 11:49:06.465435  787845 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7bb35a9e6300 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d6:2f:88:7f:d7:d9} reservation:<nil>}
	I1115 11:49:06.466375  787845 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bcbf50}
	I1115 11:49:06.466451  787845 network_create.go:124] attempt to create docker network no-preload-126380 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1115 11:49:06.466541  787845 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-126380 no-preload-126380
	I1115 11:49:06.542635  787845 network_create.go:108] docker network no-preload-126380 192.168.85.0/24 created
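(Note: network.go above walks the private 192.168.x.0/24 ranges, skips any subnet already backed by an existing bridge interface, and settles on the first free one, 192.168.85.0/24 here. Below is a simplified sketch of that scan; the step of 9 between candidates (49, 58, 67, 76, 85) is inferred from the log, and the taken subnets are passed in directly rather than read from `docker network inspect`.)

    package main

    import "fmt"

    // firstFreeSubnet returns the first 192.168.<n>.0/24 candidate not present in
    // taken, starting at 49 and stepping by 9 as the log above suggests.
    func firstFreeSubnet(taken map[string]bool) string {
        for third := 49; third <= 255; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if !taken[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{ // subnets reported as taken in the log
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
            "192.168.76.0/24": true,
        }
        fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24, matching network.go:206
    }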
	I1115 11:49:06.542669  787845 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-126380" container
	I1115 11:49:06.542741  787845 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 11:49:06.559981  787845 cli_runner.go:164] Run: docker volume create no-preload-126380 --label name.minikube.sigs.k8s.io=no-preload-126380 --label created_by.minikube.sigs.k8s.io=true
	I1115 11:49:06.577852  787845 oci.go:103] Successfully created a docker volume no-preload-126380
	I1115 11:49:06.577938  787845 cli_runner.go:164] Run: docker run --rm --name no-preload-126380-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-126380 --entrypoint /usr/bin/test -v no-preload-126380:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 11:49:06.729865  787845 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1115 11:49:06.740708  787845 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1115 11:49:06.747220  787845 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1115 11:49:06.747884  787845 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1115 11:49:06.813067  787845 cache.go:157] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1115 11:49:06.813140  787845 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 444.102936ms
	I1115 11:49:06.813175  787845 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1115 11:49:06.824076  787845 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1115 11:49:06.830279  787845 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1115 11:49:06.836706  787845 cache.go:162] opening:  /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1115 11:49:07.193812  787845 cache.go:157] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1115 11:49:07.193880  787845 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 825.239178ms
	I1115 11:49:07.193906  787845 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1115 11:49:07.242096  787845 oci.go:107] Successfully prepared a docker volume no-preload-126380
	I1115 11:49:07.242139  787845 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1115 11:49:07.242270  787845 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 11:49:07.242461  787845 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 11:49:07.301859  787845 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-126380 --name no-preload-126380 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-126380 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-126380 --network no-preload-126380 --ip 192.168.85.2 --volume no-preload-126380:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 11:49:07.726790  787845 cache.go:157] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1115 11:49:07.726867  787845 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.357069835s
	I1115 11:49:07.726898  787845 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1115 11:49:07.728287  787845 cache.go:157] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1115 11:49:07.728325  787845 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.360079098s
	I1115 11:49:07.728335  787845 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1115 11:49:07.761573  787845 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Running}}
	I1115 11:49:07.825004  787845 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:49:07.833645  787845 cache.go:157] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1115 11:49:07.833832  787845 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.465959812s
	I1115 11:49:07.833862  787845 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1115 11:49:07.884006  787845 cache.go:157] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1115 11:49:07.884039  787845 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.516620411s
	I1115 11:49:07.884052  787845 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1115 11:49:07.911472  787845 cli_runner.go:164] Run: docker exec no-preload-126380 stat /var/lib/dpkg/alternatives/iptables
	I1115 11:49:08.025147  787845 oci.go:144] the created container "no-preload-126380" has a running status.
	I1115 11:49:08.025190  787845 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa...
	I1115 11:49:08.475671  787845 cache.go:157] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1115 11:49:08.478018  787845 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.108562311s
	I1115 11:49:08.478058  787845 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1115 11:49:08.478224  787845 cache.go:87] Successfully saved all images to host disk.
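(Note: the cache.go lines above repeat one pattern per image: take a per-image lock, check whether the tarball already exists under .minikube/cache/images/<arch>/, and only download and save it when it is missing. A minimal sketch of that existence check, assuming the path layout seen in the log; the helper name is made up for illustration.)

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // cachedImagePath maps an image ref to the tarball location seen in the log,
    // e.g. "gcr.io/k8s-minikube/storage-provisioner:v5" ->
    // <cacheDir>/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
    func cachedImagePath(cacheDir, arch, image string) string {
        return filepath.Join(cacheDir, "images", arch, strings.ReplaceAll(image, ":", "_"))
    }

    func main() {
        p := cachedImagePath(os.ExpandEnv("$HOME/.minikube/cache"), "arm64",
            "gcr.io/k8s-minikube/storage-provisioner:v5")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("exists, skipping download:", p) // the cache.go:115 case above
        } else {
            fmt.Println("missing, would download and save to:", p) // the cache.go:162 "opening" case
        }
    }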
	I1115 11:49:08.679324  787845 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 11:49:08.708224  787845 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:49:08.730008  787845 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 11:49:08.730031  787845 kic_runner.go:114] Args: [docker exec --privileged no-preload-126380 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 11:49:08.802822  787845 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:49:08.820387  787845 machine.go:94] provisionDockerMachine start ...
	I1115 11:49:08.822839  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:08.843712  787845 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:08.844040  787845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 11:49:08.844050  787845 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:49:09.009342  787845 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-126380
	
	I1115 11:49:09.009416  787845 ubuntu.go:182] provisioning hostname "no-preload-126380"
	I1115 11:49:09.009505  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:09.031757  787845 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:09.032113  787845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 11:49:09.032130  787845 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-126380 && echo "no-preload-126380" | sudo tee /etc/hostname
	I1115 11:49:09.243819  787845 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-126380
	
	I1115 11:49:09.243964  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:09.264313  787845 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:09.264758  787845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 11:49:09.264816  787845 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-126380' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-126380/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-126380' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:49:09.425237  787845 main.go:143] libmachine: SSH cmd err, output: <nil>: 
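(Note: the libmachine lines above run each provisioning command over a native Go SSH client against the forwarded port 127.0.0.1:33819. A minimal sketch of the same round trip using golang.org/x/crypto/ssh, assuming the key path and port from the log; error handling is trimmed, so this is illustration only.)

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, _ := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/no-preload-126380/id_rsa"))
        signer, _ := ssh.ParsePrivateKey(key)
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-forwarded test container
        }
        client, _ := ssh.Dial("tcp", "127.0.0.1:33819", cfg)
        defer client.Close()

        session, _ := client.NewSession()
        defer session.Close()
        out, _ := session.Output("hostname") // same first command the log runs
        fmt.Print(string(out))               // no-preload-126380
    }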
	I1115 11:49:09.425260  787845 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:49:09.425290  787845 ubuntu.go:190] setting up certificates
	I1115 11:49:09.425301  787845 provision.go:84] configureAuth start
	I1115 11:49:09.425360  787845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-126380
	I1115 11:49:09.444799  787845 provision.go:143] copyHostCerts
	I1115 11:49:09.444972  787845 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:49:09.444989  787845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:49:09.445075  787845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:49:09.445184  787845 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:49:09.445195  787845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:49:09.445224  787845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:49:09.445285  787845 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:49:09.445294  787845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:49:09.445318  787845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:49:09.445368  787845 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.no-preload-126380 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-126380]
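(Note: provision.go:117 above generates a server certificate signed by the minikube CA with the listed SANs: 127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-126380. The compact stand-in below shows how those SANs end up in an x509 template with Go's standard library; it self-signs for brevity, whereas the real flow signs with the CA key from certs/ca-key.pem.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-126380"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs copied from the provision.go:117 line above
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:    []string{"localhost", "minikube", "no-preload-126380"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }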
	I1115 11:49:09.872630  787845 provision.go:177] copyRemoteCerts
	I1115 11:49:09.872700  787845 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:49:09.872753  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:09.890843  787845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:49:10.007674  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:49:10.031009  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 11:49:10.050550  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 11:49:10.069317  787845 provision.go:87] duration metric: took 643.993558ms to configureAuth
	I1115 11:49:10.069351  787845 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:49:10.069542  787845 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:49:10.069656  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:10.088205  787845 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:10.088538  787845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 11:49:10.088572  787845 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:49:10.435553  787845 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:49:10.435575  787845 machine.go:97] duration metric: took 1.615164588s to provisionDockerMachine
	I1115 11:49:10.435585  787845 client.go:176] duration metric: took 4.030626607s to LocalClient.Create
	I1115 11:49:10.435604  787845 start.go:167] duration metric: took 4.030695465s to libmachine.API.Create "no-preload-126380"
	I1115 11:49:10.435612  787845 start.go:293] postStartSetup for "no-preload-126380" (driver="docker")
	I1115 11:49:10.435622  787845 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:49:10.435700  787845 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:49:10.435743  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:10.456713  787845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:49:10.565104  787845 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:49:10.568420  787845 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:49:10.568449  787845 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:49:10.568460  787845 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:49:10.568515  787845 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:49:10.568606  787845 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:49:10.568716  787845 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:49:10.576120  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:49:10.594171  787845 start.go:296] duration metric: took 158.543458ms for postStartSetup
	I1115 11:49:10.594583  787845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-126380
	I1115 11:49:10.613997  787845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/config.json ...
	I1115 11:49:10.614283  787845 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:49:10.614337  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:10.630885  787845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:49:10.734151  787845 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:49:10.744102  787845 start.go:128] duration metric: took 4.344972701s to createHost
	I1115 11:49:10.744130  787845 start.go:83] releasing machines lock for "no-preload-126380", held for 4.345100982s
	I1115 11:49:10.744204  787845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-126380
	I1115 11:49:10.763549  787845 ssh_runner.go:195] Run: cat /version.json
	I1115 11:49:10.763604  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:10.763848  787845 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:49:10.763916  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:10.783083  787845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:49:10.793000  787845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:49:10.888893  787845 ssh_runner.go:195] Run: systemctl --version
	I1115 11:49:10.981372  787845 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:49:11.027234  787845 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:49:11.031620  787845 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:49:11.031744  787845 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:49:11.062333  787845 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 11:49:11.062356  787845 start.go:496] detecting cgroup driver to use...
	I1115 11:49:11.062391  787845 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:49:11.062446  787845 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:49:11.081123  787845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:49:11.095421  787845 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:49:11.095545  787845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:49:11.117678  787845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:49:11.137527  787845 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1115 11:49:07.241810  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	W1115 11:49:09.741868  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	I1115 11:49:11.264935  787845 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:49:11.388357  787845 docker.go:234] disabling docker service ...
	I1115 11:49:11.388542  787845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:49:11.414441  787845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:49:11.429701  787845 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:49:11.548903  787845 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:49:11.685776  787845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:49:11.700271  787845 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:49:11.715290  787845 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:49:11.715358  787845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:49:11.725424  787845 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:49:11.725534  787845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:49:11.736389  787845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:49:11.750075  787845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:49:11.760292  787845 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:49:11.769000  787845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:49:11.778072  787845 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:49:11.792073  787845 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:49:11.801814  787845 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:49:11.809649  787845 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:49:11.817594  787845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:49:11.930348  787845 ssh_runner.go:195] Run: sudo systemctl restart crio
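(Note: crio.go:59/70 above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with a series of sed commands: set pause_image to registry.k8s.io/pause:3.10.1, set cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, then restart crio. The sketch below performs the same kind of line rewrite for two of those keys with a Go regexp instead of sed; the sample config fragment is hypothetical, only the replacement values come from the log.)

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"`

        // equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

        // equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        fmt.Println(conf)
    }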
	I1115 11:49:12.055294  787845 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:49:12.055410  787845 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:49:12.059332  787845 start.go:564] Will wait 60s for crictl version
	I1115 11:49:12.059393  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.063097  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:49:12.091898  787845 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:49:12.092049  787845 ssh_runner.go:195] Run: crio --version
	I1115 11:49:12.122529  787845 ssh_runner.go:195] Run: crio --version
	I1115 11:49:12.156121  787845 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:49:12.158988  787845 cli_runner.go:164] Run: docker network inspect no-preload-126380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:49:12.174824  787845 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 11:49:12.178523  787845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:49:12.188129  787845 kubeadm.go:884] updating cluster {Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:49:12.188243  787845 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:49:12.188293  787845 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:49:12.213236  787845 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1115 11:49:12.213262  787845 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1115 11:49:12.213308  787845 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:12.213336  787845 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:12.213506  787845 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:12.213515  787845 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1115 11:49:12.213606  787845 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:12.213610  787845 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:12.213701  787845 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:12.213707  787845 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:12.215562  787845 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:12.215837  787845 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1115 11:49:12.216054  787845 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:12.216247  787845 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:12.216286  787845 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:12.216468  787845 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:12.216599  787845 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:12.216727  787845 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:12.466521  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:12.466632  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:12.472640  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:12.474837  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:12.483462  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1115 11:49:12.485706  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:12.517230  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:12.654157  787845 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1115 11:49:12.654254  787845 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:12.654349  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.654522  787845 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1115 11:49:12.654603  787845 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:12.654666  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.656565  787845 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1115 11:49:12.656709  787845 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:12.656796  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.678249  787845 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1115 11:49:12.678497  787845 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:12.678370  787845 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1115 11:49:12.678558  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.678591  787845 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1115 11:49:12.678649  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.678468  787845 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1115 11:49:12.678713  787845 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:12.678748  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.689020  787845 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1115 11:49:12.689063  787845 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:12.689120  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:12.689197  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:12.689230  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:12.689293  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:12.689325  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:12.689363  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 11:49:12.689199  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:12.735583  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:12.737093  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:12.811531  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:12.811706  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 11:49:12.811803  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:12.811888  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:12.837777  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:12.847590  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 11:49:12.847711  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:12.910223  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 11:49:12.910328  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 11:49:12.910429  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 11:49:12.910460  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 11:49:12.957830  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 11:49:12.975297  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 11:49:12.975372  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1115 11:49:12.975450  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 11:49:13.027686  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1115 11:49:13.027856  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1115 11:49:13.028062  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1115 11:49:13.028171  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1115 11:49:13.027935  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1115 11:49:13.028321  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1115 11:49:13.027990  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1115 11:49:13.028467  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1115 11:49:13.051341  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1115 11:49:13.051451  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1115 11:49:13.051452  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1115 11:49:13.051510  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1115 11:49:13.051548  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1115 11:49:13.051611  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1115 11:49:13.051630  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1115 11:49:13.051682  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1115 11:49:13.051700  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1115 11:49:13.051740  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 11:49:13.051760  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1115 11:49:13.051589  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1115 11:49:13.051910  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1115 11:49:13.051912  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1115 11:49:13.076630  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1115 11:49:13.076724  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1115 11:49:13.076844  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1115 11:49:13.076940  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1115 11:49:13.135004  787845 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1115 11:49:13.135163  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1115 11:49:13.561620  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1115 11:49:13.561698  787845 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1115 11:49:13.561777  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1115 11:49:13.680914  787845 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1115 11:49:13.681159  787845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:15.305713  787845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.743894953s)
	I1115 11:49:15.305776  787845 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.624571011s)
	I1115 11:49:15.305801  787845 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1115 11:49:15.305839  787845 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:15.305892  787845 ssh_runner.go:195] Run: which crictl
	I1115 11:49:15.305955  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1115 11:49:15.305982  787845 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1115 11:49:15.306009  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	W1115 11:49:12.240567  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	W1115 11:49:14.741510  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	I1115 11:49:16.636929  787845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.330899294s)
	I1115 11:49:16.636953  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1115 11:49:16.636970  787845 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 11:49:16.637016  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 11:49:16.637076  787845 ssh_runner.go:235] Completed: which crictl: (1.331171608s)
	I1115 11:49:16.637109  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:16.709368  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:18.105639  787845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.468600984s)
	I1115 11:49:18.105667  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1115 11:49:18.105686  787845 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 11:49:18.105734  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 11:49:18.105805  787845 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.396413073s)
	I1115 11:49:18.105846  787845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:19.434756  787845 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.328881511s)
	I1115 11:49:19.434808  787845 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1115 11:49:19.434903  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1115 11:49:19.434955  787845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.329200554s)
	I1115 11:49:19.434975  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1115 11:49:19.434997  787845 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1115 11:49:19.435040  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	W1115 11:49:16.742216  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	W1115 11:49:19.249257  784287 pod_ready.go:104] pod "coredns-66bc5c9577-2l449" is not "Ready", error: <nil>
	I1115 11:49:21.241932  784287 pod_ready.go:94] pod "coredns-66bc5c9577-2l449" is "Ready"
	I1115 11:49:21.241965  784287 pod_ready.go:86] duration metric: took 34.007639528s for pod "coredns-66bc5c9577-2l449" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:21.244954  784287 pod_ready.go:83] waiting for pod "etcd-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:21.250765  784287 pod_ready.go:94] pod "etcd-embed-certs-404149" is "Ready"
	I1115 11:49:21.250801  784287 pod_ready.go:86] duration metric: took 5.810556ms for pod "etcd-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:21.254481  784287 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:21.261741  784287 pod_ready.go:94] pod "kube-apiserver-embed-certs-404149" is "Ready"
	I1115 11:49:21.261774  784287 pod_ready.go:86] duration metric: took 7.259236ms for pod "kube-apiserver-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:21.264695  784287 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:21.440031  784287 pod_ready.go:94] pod "kube-controller-manager-embed-certs-404149" is "Ready"
	I1115 11:49:21.440067  784287 pod_ready.go:86] duration metric: took 175.338405ms for pod "kube-controller-manager-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:21.639338  784287 pod_ready.go:83] waiting for pod "kube-proxy-5d2lb" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:22.039530  784287 pod_ready.go:94] pod "kube-proxy-5d2lb" is "Ready"
	I1115 11:49:22.039574  784287 pod_ready.go:86] duration metric: took 400.202486ms for pod "kube-proxy-5d2lb" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:22.238924  784287 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:22.639015  784287 pod_ready.go:94] pod "kube-scheduler-embed-certs-404149" is "Ready"
	I1115 11:49:22.639047  784287 pod_ready.go:86] duration metric: took 400.093404ms for pod "kube-scheduler-embed-certs-404149" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:49:22.639060  784287 pod_ready.go:40] duration metric: took 35.408844515s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:49:22.727121  784287 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:49:22.731768  784287 out.go:179] * Done! kubectl is now configured to use "embed-certs-404149" cluster and "default" namespace by default
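
The embed-certs-404149 run above waits for CoreDNS and each static control-plane pod to report Ready before printing "Done!". A minimal way to reproduce that readiness check by hand, assuming kubectl is pointed at the context the log just configured:

    # wait for CoreDNS, then list the static control-plane pods the run also polled
    kubectl --context embed-certs-404149 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=120s
    kubectl --context embed-certs-404149 -n kube-system get pods \
      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'
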
	I1115 11:49:21.184103  787845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.749043738s)
	I1115 11:49:21.184133  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1115 11:49:21.184138  787845 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.749216605s)
	I1115 11:49:21.184152  787845 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1115 11:49:21.184161  787845 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1115 11:49:21.184184  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1115 11:49:21.184201  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1115 11:49:25.115546  787845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.931323402s)
	I1115 11:49:25.115578  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1115 11:49:25.115596  787845 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1115 11:49:25.115646  787845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1115 11:49:25.743592  787845 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1115 11:49:25.743625  787845 cache_images.go:125] Successfully loaded all cached images
	I1115 11:49:25.743631  787845 cache_images.go:94] duration metric: took 13.530354185s to LoadCachedImages
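
The no-preload profile above found none of the v1.34.1 images in the crio image store, removed any stale tags with crictl, scp'd the cached tarballs from the host into /var/lib/minikube/images, and loaded each one with podman. A rough manual equivalent of one such round trip, run inside the node (image name and path taken from the log):

    # confirm the runtime does not already have the image
    sudo crictl inspecti registry.k8s.io/etcd:3.6.4-0 || echo "not present"
    # load the tarball minikube copied over into the shared containers storage
    sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
    # crio and podman share that storage, so crictl should now list it
    sudo crictl images | grep etcd
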
	I1115 11:49:25.743643  787845 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1115 11:49:25.743732  787845 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-126380 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
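
The kubelet unit override above pins ExecStart to the staged /var/lib/minikube/binaries/v1.34.1/kubelet and passes the hostname override and node IP for no-preload-126380. To see what actually lands on the node once the files are written (the drop-in path appears a few steps later in this log), a quick check might be:

    # show the kubelet unit together with minikube's 10-kubeadm.conf drop-in
    sudo systemctl cat kubelet
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
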
	I1115 11:49:25.743819  787845 ssh_runner.go:195] Run: crio config
	I1115 11:49:25.809912  787845 cni.go:84] Creating CNI manager for ""
	I1115 11:49:25.809937  787845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:49:25.809953  787845 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:49:25.809977  787845 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-126380 NodeName:no-preload-126380 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:49:25.810324  787845 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-126380"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
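
This rendered multi-document config is written to /var/tmp/minikube/kubeadm.yaml.new and copied to kubeadm.yaml just before kubeadm init runs (see the steps below). On kubeadm releases that ship the validate subcommand it could be sanity-checked in place; this is only a sketch, the test itself does not run it:

    # validate the rendered config against the kubeadm API types
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml
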
	
	I1115 11:49:25.810414  787845 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:49:25.823131  787845 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1115 11:49:25.823199  787845 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1115 11:49:25.831459  787845 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1115 11:49:25.831550  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1115 11:49:25.832500  787845 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1115 11:49:25.832503  787845 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1115 11:49:25.836233  787845 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1115 11:49:25.836268  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1115 11:49:26.669444  787845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:49:26.693463  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1115 11:49:26.701767  787845 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1115 11:49:26.701804  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1115 11:49:26.790692  787845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1115 11:49:26.808783  787845 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1115 11:49:26.808897  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
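
Because this is a no-preload profile, kubectl, kubelet and kubeadm are fetched from dl.k8s.io with the checksum taken from the matching .sha256 file, cached under .minikube/cache/linux/arm64/v1.34.1, then scp'd into /var/lib/minikube/binaries. A hand-rolled version of that download-and-verify step, using the URLs from the log:

    # fetch one binary and its published sha256, verify before installing
    curl -fLO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet
    curl -fLO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
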
	I1115 11:49:27.339578  787845 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:49:27.347559  787845 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 11:49:27.361750  787845 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:49:27.374655  787845 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1115 11:49:27.391923  787845 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:49:27.395426  787845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
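
The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current node IP, so the controlPlaneEndpoint in the kubeadm config resolves locally. Verifying the rewrite from inside the node is just:

    # the endpoint should now resolve to the node IP shown in the log
    getent hosts control-plane.minikube.internal
    grep control-plane.minikube.internal /etc/hosts
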
	I1115 11:49:27.405010  787845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:49:27.526905  787845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:49:27.544569  787845 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380 for IP: 192.168.85.2
	I1115 11:49:27.544588  787845 certs.go:195] generating shared ca certs ...
	I1115 11:49:27.544614  787845 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:27.544754  787845 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:49:27.544794  787845 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:49:27.544801  787845 certs.go:257] generating profile certs ...
	I1115 11:49:27.544958  787845 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.key
	I1115 11:49:27.544975  787845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt with IP's: []
	I1115 11:49:27.960655  787845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt ...
	I1115 11:49:27.960688  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt: {Name:mk40d5f9049445c76d7ff12fc64f93eb3900925d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:27.960898  787845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.key ...
	I1115 11:49:27.960911  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.key: {Name:mkf193e03cbd780b09ed1a5bc0b40e4fdb1d3987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:27.961014  787845 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key.d85d6acb
	I1115 11:49:27.961030  787845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.crt.d85d6acb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1115 11:49:28.319180  787845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.crt.d85d6acb ...
	I1115 11:49:28.319214  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.crt.d85d6acb: {Name:mkf9e268be0128d91467436a8d4d4b86b7104140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:28.319402  787845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key.d85d6acb ...
	I1115 11:49:28.319416  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key.d85d6acb: {Name:mkebef29ef024ee0a65394a2500f7f9420bbb238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:28.319495  787845 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.crt.d85d6acb -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.crt
	I1115 11:49:28.319574  787845 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key.d85d6acb -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key
	I1115 11:49:28.319634  787845 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.key
	I1115 11:49:28.319650  787845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.crt with IP's: []
	I1115 11:49:28.737729  787845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.crt ...
	I1115 11:49:28.737760  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.crt: {Name:mk2482c56b63a21a5d9bea5eecaefa4ad9a4649e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:28.737949  787845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.key ...
	I1115 11:49:28.737962  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.key: {Name:mka42158b2d97a744a1695a70b24050ff2a02587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
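
The profile certificates generated above (client, apiserver, aggregator proxy-client) are all signed by the shared minikubeCA; the apiserver cert carries the service IP, loopback and node IP as SANs. After it is copied to the node as /var/lib/minikube/certs/apiserver.crt (see the scp steps below), the SANs can be inspected with:

    # print the subject alternative names baked into the apiserver cert
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 "Subject Alternative Name"
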
	I1115 11:49:28.738155  787845 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:49:28.738199  787845 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:49:28.738215  787845 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:49:28.738245  787845 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:49:28.738273  787845 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:49:28.738301  787845 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:49:28.738346  787845 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:49:28.738921  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:49:28.757998  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:49:28.776793  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:49:28.794451  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:49:28.812436  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 11:49:28.831976  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:49:28.851065  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:49:28.869009  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:49:28.888022  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:49:28.906375  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:49:28.924079  787845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:49:28.942219  787845 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:49:28.955319  787845 ssh_runner.go:195] Run: openssl version
	I1115 11:49:28.961550  787845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:49:28.970021  787845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:49:28.974470  787845 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:49:28.974538  787845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:49:29.015757  787845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:49:29.024426  787845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:49:29.033019  787845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:49:29.036746  787845 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:49:29.036809  787845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:49:29.077567  787845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:49:29.085981  787845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:49:29.094929  787845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:49:29.098711  787845 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:49:29.098809  787845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:49:29.144442  787845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
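
The sequence above publishes each CA to the system trust store: copy the PEM into /usr/share/ca-certificates, compute its openssl subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients trust it. The hash-link step for one file, stitched together from the same commands:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941, as linked above
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
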
	I1115 11:49:29.152996  787845 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:49:29.156741  787845 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 11:49:29.156796  787845 kubeadm.go:401] StartCluster: {Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:49:29.157018  787845 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:49:29.157082  787845 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:49:29.198212  787845 cri.go:89] found id: ""
	I1115 11:49:29.198334  787845 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:49:29.207223  787845 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 11:49:29.216060  787845 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 11:49:29.216157  787845 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 11:49:29.227027  787845 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 11:49:29.227049  787845 kubeadm.go:158] found existing configuration files:
	
	I1115 11:49:29.227114  787845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 11:49:29.237799  787845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 11:49:29.237878  787845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 11:49:29.245289  787845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 11:49:29.255742  787845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 11:49:29.255840  787845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 11:49:29.263853  787845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 11:49:29.271633  787845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 11:49:29.271702  787845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 11:49:29.279788  787845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 11:49:29.287191  787845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 11:49:29.287295  787845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
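
The block above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected https://control-plane.minikube.internal:8443 endpoint and removed when the endpoint is absent (here every grep fails simply because the files do not exist yet on a fresh start). Written out as a loop, the logic is roughly:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
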
	I1115 11:49:29.294741  787845 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 11:49:29.337847  787845 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 11:49:29.337916  787845 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 11:49:29.359637  787845 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 11:49:29.359723  787845 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 11:49:29.359766  787845 kubeadm.go:319] OS: Linux
	I1115 11:49:29.359824  787845 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 11:49:29.359884  787845 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 11:49:29.359937  787845 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 11:49:29.359992  787845 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 11:49:29.360047  787845 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 11:49:29.360102  787845 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 11:49:29.360154  787845 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 11:49:29.360208  787845 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 11:49:29.360259  787845 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 11:49:29.446201  787845 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 11:49:29.446322  787845 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 11:49:29.446421  787845 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
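
As the preflight message notes, the control-plane images can be pulled ahead of time with kubeadm itself; with the binaries staged under /var/lib/minikube/binaries and crio's socket from the config above, that would be roughly:

    # list and pre-pull the images kubeadm init will need
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images list --kubernetes-version v1.34.1
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images pull \
      --kubernetes-version v1.34.1 --cri-socket unix:///var/run/crio/crio.sock
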
	I1115 11:49:29.468848  787845 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 11:49:29.475576  787845 out.go:252]   - Generating certificates and keys ...
	I1115 11:49:29.475772  787845 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 11:49:29.475907  787845 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 11:49:29.943730  787845 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 11:49:30.614103  787845 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 11:49:31.169774  787845 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 11:49:32.518859  787845 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 11:49:33.054537  787845 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 11:49:33.054921  787845 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-126380] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 11:49:33.232654  787845 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 11:49:33.233091  787845 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-126380] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 11:49:33.425384  787845 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 11:49:33.675606  787845 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 11:49:33.909070  787845 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 11:49:33.909406  787845 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 11:49:34.841037  787845 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 11:49:35.216684  787845 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 11:49:35.825940  787845 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 11:49:36.505331  787845 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 11:49:36.880137  787845 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 11:49:36.881966  787845 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 11:49:36.889040  787845 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.6393187Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=607b5834-8407-413a-8f7c-75835d05d699 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.6517989Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f043b3b6-329f-463c-9176-226c69669912 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.651911639Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.665012629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.665276484Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0662ec9a703106e23bd2dc9d61e5a2f020a180bb8d541a5b6f7638311c4dfb07/merged/etc/passwd: no such file or directory"
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.665301453Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0662ec9a703106e23bd2dc9d61e5a2f020a180bb8d541a5b6f7638311c4dfb07/merged/etc/group: no such file or directory"
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.665563216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.69165216Z" level=info msg="Created container d7c6dc21b5fd9f658868453f2c488629e112e772ea09f0596e980ba333d294cb: kube-system/storage-provisioner/storage-provisioner" id=f043b3b6-329f-463c-9176-226c69669912 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.693036167Z" level=info msg="Starting container: d7c6dc21b5fd9f658868453f2c488629e112e772ea09f0596e980ba333d294cb" id=c2ad4da9-23e6-4b4e-9c79-abb59286dc69 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:49:16 embed-certs-404149 crio[651]: time="2025-11-15T11:49:16.702453105Z" level=info msg="Started container" PID=1631 containerID=d7c6dc21b5fd9f658868453f2c488629e112e772ea09f0596e980ba333d294cb description=kube-system/storage-provisioner/storage-provisioner id=c2ad4da9-23e6-4b4e-9c79-abb59286dc69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8eba469d9fd510992b983ec5fb91c079631d6193893614f4412aec587b9e9806
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.40124298Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.416578627Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.416626742Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.416706891Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.423909068Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.423941216Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.423975161Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.429743912Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.429781525Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.429804631Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.432888307Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.432927832Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.43295065Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.444050437Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:49:26 embed-certs-404149 crio[651]: time="2025-11-15T11:49:26.444392488Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d7c6dc21b5fd9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   8eba469d9fd51       storage-provisioner                          kube-system
	c0aea60eb23fb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago       Exited              dashboard-metrics-scraper   2                   ba5775d840f6e       dashboard-metrics-scraper-6ffb444bf9-bhxn6   kubernetes-dashboard
	cc2532c9ae831       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   121c83ba90ff2       kubernetes-dashboard-855c9754f9-97q22        kubernetes-dashboard
	09258c4ef6062       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   a13a156f8b4da       coredns-66bc5c9577-2l449                     kube-system
	1ecc28e1024b3       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   98c4e884c8255       busybox                                      default
	496e2fb54178e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   2ca46794c1477       kindnet-qsvh7                                kube-system
	ee0d081abddf1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   8eba469d9fd51       storage-provisioner                          kube-system
	80745c56ff5e6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   b45b58a5a10bd       kube-proxy-5d2lb                             kube-system
	8fe33f405cefe       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   06ccbd6f71e02       etcd-embed-certs-404149                      kube-system
	b3bad56f102ba       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   d748df7b7a92f       kube-apiserver-embed-certs-404149            kube-system
	9412dd63cbe6e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   e82544de6015b       kube-scheduler-embed-certs-404149            kube-system
	f782a05f34be5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   8f3fcc75f6601       kube-controller-manager-embed-certs-404149   kube-system
	
	
	==> coredns [09258c4ef606211d4569a1f07e1868b18902b874617b2f6556a7c2f17f7edb9d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59520 - 693 "HINFO IN 7809165300182040317.2342679607944230001. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025870828s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-404149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-404149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=embed-certs-404149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_47_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:47:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-404149
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:49:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:49:15 +0000   Sat, 15 Nov 2025 11:47:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:49:15 +0000   Sat, 15 Nov 2025 11:47:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:49:15 +0000   Sat, 15 Nov 2025 11:47:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:49:15 +0000   Sat, 15 Nov 2025 11:48:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-404149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                e5de80db-1b6a-4760-801b-d0fd814d39f6
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-2l449                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m19s
	  kube-system                 etcd-embed-certs-404149                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-qsvh7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-embed-certs-404149             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-embed-certs-404149    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-5d2lb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-embed-certs-404149             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bhxn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-97q22         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m17s              kube-proxy       
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 2m25s              kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m25s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     2m24s              kubelet          Node embed-certs-404149 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m24s              kubelet          Node embed-certs-404149 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m24s              kubelet          Node embed-certs-404149 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m20s              node-controller  Node embed-certs-404149 event: Registered Node embed-certs-404149 in Controller
	  Normal   NodeReady                98s                kubelet          Node embed-certs-404149 status is now: NodeReady
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node embed-certs-404149 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node embed-certs-404149 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node embed-certs-404149 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                node-controller  Node embed-certs-404149 event: Registered Node embed-certs-404149 in Controller
	
	
	==> dmesg <==
	[Nov15 11:26] overlayfs: idmapped layers are currently not supported
	[Nov15 11:28] overlayfs: idmapped layers are currently not supported
	[ +23.116625] overlayfs: idmapped layers are currently not supported
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	[Nov15 11:44] overlayfs: idmapped layers are currently not supported
	[Nov15 11:46] overlayfs: idmapped layers are currently not supported
	[Nov15 11:47] overlayfs: idmapped layers are currently not supported
	[ +42.475391] overlayfs: idmapped layers are currently not supported
	[Nov15 11:48] overlayfs: idmapped layers are currently not supported
	[Nov15 11:49] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8fe33f405cefe31c9ab389c51d0c2b2ca0f66c055679053ef5665058df3e4a50] <==
	{"level":"warn","ts":"2025-11-15T11:48:43.030453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.075570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.092337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.137618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.158670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.189251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.204536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.247358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.287876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.314975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.352637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.378408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.406647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.428152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.463701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.497807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.533080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.559303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.575934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.592650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.617661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.650779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.667132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.692896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:48:43.751428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52088","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:49:41 up  3:32,  0 user,  load average: 3.50, 3.28, 2.86
	Linux embed-certs-404149 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [496e2fb54178ec02d3986f84953b12a001e15ee7cc882c83e58e00fbd053f25b] <==
	I1115 11:48:46.234344       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:48:46.234572       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 11:48:46.234708       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:48:46.234720       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:48:46.234735       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:48:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:48:46.407825       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:48:46.407852       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:48:46.407861       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:48:46.409036       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 11:49:16.398411       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 11:49:16.407986       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 11:49:16.408174       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 11:49:16.410256       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1115 11:49:17.907956       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:49:17.908045       1 metrics.go:72] Registering metrics
	I1115 11:49:17.908130       1 controller.go:711] "Syncing nftables rules"
	I1115 11:49:26.400951       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 11:49:26.400987       1 main.go:301] handling current node
	I1115 11:49:36.404965       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 11:49:36.405073       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b3bad56f102bafd52e8e47890a2907bc310240d0d6905fdf10422d09d338938d] <==
	I1115 11:48:45.004928       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 11:48:45.005013       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 11:48:45.005067       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 11:48:45.005203       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 11:48:45.005258       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 11:48:45.029235       1 aggregator.go:171] initial CRD sync complete...
	I1115 11:48:45.029276       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 11:48:45.029285       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 11:48:45.029294       1 cache.go:39] Caches are synced for autoregister controller
	I1115 11:48:45.037008       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 11:48:45.037110       1 policy_source.go:240] refreshing policies
	I1115 11:48:45.038089       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 11:48:45.072557       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1115 11:48:45.084392       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 11:48:45.446088       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 11:48:45.523905       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:48:46.117965       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 11:48:46.290360       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 11:48:46.387209       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:48:46.424582       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:48:46.591047       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.96.167"}
	I1115 11:48:46.644334       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.169.166"}
	I1115 11:48:48.477711       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 11:48:48.824469       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 11:48:49.088713       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f782a05f34be564eb380a59a1f625d50f0d686d350bc75f48d4e7b5587a399bb] <==
	I1115 11:48:48.473136       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 11:48:48.474587       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 11:48:48.475738       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 11:48:48.475902       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 11:48:48.475987       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 11:48:48.476026       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 11:48:48.476056       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 11:48:48.475836       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:48:48.480041       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 11:48:48.480208       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 11:48:48.482504       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 11:48:48.487014       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 11:48:48.490113       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 11:48:48.492995       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 11:48:48.497174       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 11:48:48.497410       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 11:48:48.500370       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 11:48:48.500823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 11:48:48.510108       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 11:48:48.513761       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 11:48:48.513785       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:48:48.517483       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:48:48.517507       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:48:48.517513       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:48:48.519735       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-proxy [80745c56ff5e6bc966333b250babd40241909a49ddafe4142822f4aa0c5dfe6e] <==
	I1115 11:48:46.714310       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:48:46.842498       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:48:46.952931       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:48:46.953044       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 11:48:46.956953       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:48:46.994202       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:48:46.994313       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:48:46.999771       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:48:47.000200       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:48:47.000451       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:48:47.002072       1 config.go:200] "Starting service config controller"
	I1115 11:48:47.002169       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:48:47.002212       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:48:47.002242       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:48:47.002288       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:48:47.002316       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:48:47.003103       1 config.go:309] "Starting node config controller"
	I1115 11:48:47.006010       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:48:47.006108       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:48:47.107534       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 11:48:47.112893       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 11:48:47.103309       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9412dd63cbe6ee0643666a35f225412ac451380045d2849d5220158a0db17940] <==
	I1115 11:48:43.733028       1 serving.go:386] Generated self-signed cert in-memory
	I1115 11:48:47.401254       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 11:48:47.401418       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:48:47.406730       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 11:48:47.407092       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:48:47.418858       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:48:47.407107       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:48:47.435380       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:48:47.407123       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 11:48:47.407067       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 11:48:47.439425       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 11:48:47.519784       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:48:47.539558       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 11:48:47.539687       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 15 11:48:49 embed-certs-404149 kubelet[778]: I1115 11:48:49.057709     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7e00fda2-305a-44d4-aab6-5f9f7f148936-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-bhxn6\" (UID: \"7e00fda2-305a-44d4-aab6-5f9f7f148936\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhxn6"
	Nov 15 11:48:49 embed-certs-404149 kubelet[778]: W1115 11:48:49.263484     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/crio-ba5775d840f6e92b87b5cfa8663c6fae7b4012bdb19ae4e43de587b7e2003bb6 WatchSource:0}: Error finding container ba5775d840f6e92b87b5cfa8663c6fae7b4012bdb19ae4e43de587b7e2003bb6: Status 404 returned error can't find the container with id ba5775d840f6e92b87b5cfa8663c6fae7b4012bdb19ae4e43de587b7e2003bb6
	Nov 15 11:48:49 embed-certs-404149 kubelet[778]: W1115 11:48:49.287957     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/69e998144c087b2c7aa1ad9cf9bc75854cee374510149d67d2dd4f348773a408/crio-121c83ba90ff2aa238ddb2a4292316ccf7eca02b4c26057e82bb51c46cbcac30 WatchSource:0}: Error finding container 121c83ba90ff2aa238ddb2a4292316ccf7eca02b4c26057e82bb51c46cbcac30: Status 404 returned error can't find the container with id 121c83ba90ff2aa238ddb2a4292316ccf7eca02b4c26057e82bb51c46cbcac30
	Nov 15 11:48:51 embed-certs-404149 kubelet[778]: I1115 11:48:51.103939     778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 15 11:48:54 embed-certs-404149 kubelet[778]: I1115 11:48:54.549390     778 scope.go:117] "RemoveContainer" containerID="7416decca5739ea9282cd272800512c1d9483ca62b7c360aef301f2596509ed3"
	Nov 15 11:48:55 embed-certs-404149 kubelet[778]: I1115 11:48:55.557240     778 scope.go:117] "RemoveContainer" containerID="7416decca5739ea9282cd272800512c1d9483ca62b7c360aef301f2596509ed3"
	Nov 15 11:48:55 embed-certs-404149 kubelet[778]: I1115 11:48:55.557388     778 scope.go:117] "RemoveContainer" containerID="d589d7243aa3e7a252e8e9c761451a2fc810f89b372a4dbbfc5ea46ff42010d2"
	Nov 15 11:48:55 embed-certs-404149 kubelet[778]: E1115 11:48:55.557613     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhxn6_kubernetes-dashboard(7e00fda2-305a-44d4-aab6-5f9f7f148936)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhxn6" podUID="7e00fda2-305a-44d4-aab6-5f9f7f148936"
	Nov 15 11:48:56 embed-certs-404149 kubelet[778]: I1115 11:48:56.561183     778 scope.go:117] "RemoveContainer" containerID="d589d7243aa3e7a252e8e9c761451a2fc810f89b372a4dbbfc5ea46ff42010d2"
	Nov 15 11:48:56 embed-certs-404149 kubelet[778]: E1115 11:48:56.561347     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhxn6_kubernetes-dashboard(7e00fda2-305a-44d4-aab6-5f9f7f148936)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhxn6" podUID="7e00fda2-305a-44d4-aab6-5f9f7f148936"
	Nov 15 11:48:59 embed-certs-404149 kubelet[778]: I1115 11:48:59.243268     778 scope.go:117] "RemoveContainer" containerID="d589d7243aa3e7a252e8e9c761451a2fc810f89b372a4dbbfc5ea46ff42010d2"
	Nov 15 11:48:59 embed-certs-404149 kubelet[778]: E1115 11:48:59.244043     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhxn6_kubernetes-dashboard(7e00fda2-305a-44d4-aab6-5f9f7f148936)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhxn6" podUID="7e00fda2-305a-44d4-aab6-5f9f7f148936"
	Nov 15 11:49:12 embed-certs-404149 kubelet[778]: I1115 11:49:12.446266     778 scope.go:117] "RemoveContainer" containerID="d589d7243aa3e7a252e8e9c761451a2fc810f89b372a4dbbfc5ea46ff42010d2"
	Nov 15 11:49:12 embed-certs-404149 kubelet[778]: I1115 11:49:12.615415     778 scope.go:117] "RemoveContainer" containerID="d589d7243aa3e7a252e8e9c761451a2fc810f89b372a4dbbfc5ea46ff42010d2"
	Nov 15 11:49:12 embed-certs-404149 kubelet[778]: I1115 11:49:12.616082     778 scope.go:117] "RemoveContainer" containerID="c0aea60eb23fba11411e06b64480c0864177c8fd4eb87503bd582fb506b554c1"
	Nov 15 11:49:12 embed-certs-404149 kubelet[778]: E1115 11:49:12.616422     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhxn6_kubernetes-dashboard(7e00fda2-305a-44d4-aab6-5f9f7f148936)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhxn6" podUID="7e00fda2-305a-44d4-aab6-5f9f7f148936"
	Nov 15 11:49:12 embed-certs-404149 kubelet[778]: I1115 11:49:12.654213     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-97q22" podStartSLOduration=13.520549815 podStartE2EDuration="24.652164005s" podCreationTimestamp="2025-11-15 11:48:48 +0000 UTC" firstStartedPulling="2025-11-15 11:48:49.292161577 +0000 UTC m=+11.096110593" lastFinishedPulling="2025-11-15 11:49:00.423775767 +0000 UTC m=+22.227724783" observedRunningTime="2025-11-15 11:49:00.610167089 +0000 UTC m=+22.414116113" watchObservedRunningTime="2025-11-15 11:49:12.652164005 +0000 UTC m=+34.456113029"
	Nov 15 11:49:16 embed-certs-404149 kubelet[778]: I1115 11:49:16.634568     778 scope.go:117] "RemoveContainer" containerID="ee0d081abddf11575d32cec2a52f4a3d14483c7066159cc5a3b99aa279f76238"
	Nov 15 11:49:19 embed-certs-404149 kubelet[778]: I1115 11:49:19.238148     778 scope.go:117] "RemoveContainer" containerID="c0aea60eb23fba11411e06b64480c0864177c8fd4eb87503bd582fb506b554c1"
	Nov 15 11:49:19 embed-certs-404149 kubelet[778]: E1115 11:49:19.238927     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhxn6_kubernetes-dashboard(7e00fda2-305a-44d4-aab6-5f9f7f148936)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhxn6" podUID="7e00fda2-305a-44d4-aab6-5f9f7f148936"
	Nov 15 11:49:30 embed-certs-404149 kubelet[778]: I1115 11:49:30.446876     778 scope.go:117] "RemoveContainer" containerID="c0aea60eb23fba11411e06b64480c0864177c8fd4eb87503bd582fb506b554c1"
	Nov 15 11:49:30 embed-certs-404149 kubelet[778]: E1115 11:49:30.447508     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhxn6_kubernetes-dashboard(7e00fda2-305a-44d4-aab6-5f9f7f148936)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhxn6" podUID="7e00fda2-305a-44d4-aab6-5f9f7f148936"
	Nov 15 11:49:35 embed-certs-404149 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 11:49:35 embed-certs-404149 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 11:49:35 embed-certs-404149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [cc2532c9ae8316b0f9a928a64b853c1143cd4bf2cc7096607b847819a61c8908] <==
	2025/11/15 11:49:00 Using namespace: kubernetes-dashboard
	2025/11/15 11:49:00 Using in-cluster config to connect to apiserver
	2025/11/15 11:49:00 Using secret token for csrf signing
	2025/11/15 11:49:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 11:49:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 11:49:00 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 11:49:00 Generating JWE encryption key
	2025/11/15 11:49:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 11:49:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 11:49:01 Initializing JWE encryption key from synchronized object
	2025/11/15 11:49:01 Creating in-cluster Sidecar client
	2025/11/15 11:49:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 11:49:01 Serving insecurely on HTTP port: 9090
	2025/11/15 11:49:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 11:49:00 Starting overwatch
	
	
	==> storage-provisioner [d7c6dc21b5fd9f658868453f2c488629e112e772ea09f0596e980ba333d294cb] <==
	I1115 11:49:16.716028       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 11:49:16.742360       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 11:49:16.742642       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 11:49:16.746087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:20.201764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:24.464051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:28.063202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:31.118075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:34.141429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:34.148781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:49:34.149070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 11:49:34.149297       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-404149_71d63121-c2cc-44b3-b175-3c5389d7ef66!
	I1115 11:49:34.152851       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ea4f04f-64df-44af-afb1-3382b56ac68d", APIVersion:"v1", ResourceVersion:"651", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-404149_71d63121-c2cc-44b3-b175-3c5389d7ef66 became leader
	W1115 11:49:34.162972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:34.169041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:49:34.252664       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-404149_71d63121-c2cc-44b3-b175-3c5389d7ef66!
	W1115 11:49:36.178866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:36.186905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:38.190760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:38.198405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:40.207914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:49:40.224225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ee0d081abddf11575d32cec2a52f4a3d14483c7066159cc5a3b99aa279f76238] <==
	I1115 11:48:46.436564       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 11:49:16.611577       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-404149 -n embed-certs-404149
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-404149 -n embed-certs-404149: exit status 2 (615.721847ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-404149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.14s)
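Note: the post-mortem above shows the API server still reported as "Running" after the pause attempt. A minimal manual sketch of the same pause-and-check sequence, assuming the embed-certs-404149 profile is still up (commands taken from the log output above, not the test's exact verification logic):

	out/minikube-linux-arm64 pause -p embed-certs-404149
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-404149   # a successful pause would be expected to print "Paused"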

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-126380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-126380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (300.251939ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:50:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-126380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
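Note: per the stderr above, the addon-enable path fails its paused check because `sudo runc list -f json` exits non-zero ("open /run/runc: no such file or directory"). A hedged sketch for re-running just that failing check inside the node, assuming the no-preload-126380 container is still running and `minikube ssh -- <cmd>` passthrough is used:

	out/minikube-linux-arm64 ssh -p no-preload-126380 -- sudo runc list -f json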
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-126380 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-126380 describe deploy/metrics-server -n kube-system: exit status 1 (101.130024ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-126380 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
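Note: the assertion at start_stop_delete_test.go:219 expects the metrics-server deployment to reference "fake.domain/registry.k8s.io/echoserver:1.4", i.e. the image and registry overrides passed to `addons enable`. A minimal sketch for inspecting the deployed image manually, assuming the deployment had actually been created (here it was not):

	kubectl --context no-preload-126380 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'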
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-126380
helpers_test.go:243: (dbg) docker inspect no-preload-126380:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf",
	        "Created": "2025-11-15T11:49:07.318214347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 788150,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:49:07.393523433Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/hosts",
	        "LogPath": "/var/lib/docker/containers/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf-json.log",
	        "Name": "/no-preload-126380",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-126380:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-126380",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf",
	                "LowerDir": "/var/lib/docker/overlay2/9848c74ea17203b8050bbe97a4da3abb8cf001cde7edd4cbb584ff0a4c7cd5e6-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9848c74ea17203b8050bbe97a4da3abb8cf001cde7edd4cbb584ff0a4c7cd5e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9848c74ea17203b8050bbe97a4da3abb8cf001cde7edd4cbb584ff0a4c7cd5e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9848c74ea17203b8050bbe97a4da3abb8cf001cde7edd4cbb584ff0a4c7cd5e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-126380",
	                "Source": "/var/lib/docker/volumes/no-preload-126380/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-126380",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-126380",
	                "name.minikube.sigs.k8s.io": "no-preload-126380",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a81a31b94dc6f44babda45b61d76e137dd6c20f3efdca98f16f00d4f97c259e9",
	            "SandboxKey": "/var/run/docker/netns/a81a31b94dc6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33823"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-126380": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:7a:a0:42:cc:82",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1b9530ecfade28bc16fd6c10682aa7624f38192683bf3f788bebea9faf0c447",
	                    "EndpointID": "b4f2e39fcbc112a78fe487ea2be7fca78b885f17355ba3bb840d74be611a76e8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-126380",
	                        "0b66713a6755"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-126380 -n no-preload-126380
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-126380 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-126380 logs -n 25: (1.509730132s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-872969                                                                                                                                                                                                                     │ old-k8s-version-872969       │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:45 UTC │
	│ start   │ -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:45 UTC │ 15 Nov 25 11:47 UTC │
	│ start   │ -p cert-expiration-636406 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ delete  │ -p cert-expiration-636406                                                                                                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-769461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-769461 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:47 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-769461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:47 UTC │
	│ start   │ -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable metrics-server -p embed-certs-404149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ stop    │ -p embed-certs-404149 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable dashboard -p embed-certs-404149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:49 UTC │
	│ image   │ default-k8s-diff-port-769461 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ pause   │ -p default-k8s-diff-port-769461 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p disable-driver-mounts-200933                                                                                                                                                                                                               │ disable-driver-mounts-200933 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ start   │ -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:50 UTC │
	│ image   │ embed-certs-404149 image list --format=json                                                                                                                                                                                                   │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ pause   │ -p embed-certs-404149 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │                     │
	│ delete  │ -p embed-certs-404149                                                                                                                                                                                                                         │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p embed-certs-404149                                                                                                                                                                                                                         │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ start   │ -p newest-cni-600818 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-126380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:49:46
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:49:46.801757  791960 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:49:46.802385  791960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:49:46.802421  791960 out.go:374] Setting ErrFile to fd 2...
	I1115 11:49:46.802443  791960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:49:46.802736  791960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:49:46.803205  791960 out.go:368] Setting JSON to false
	I1115 11:49:46.804203  791960 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12738,"bootTime":1763194649,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:49:46.804301  791960 start.go:143] virtualization:  
	I1115 11:49:46.809819  791960 out.go:179] * [newest-cni-600818] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:49:46.813435  791960 notify.go:221] Checking for updates...
	I1115 11:49:46.813402  791960 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:49:46.817895  791960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:49:46.821033  791960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:49:46.823902  791960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:49:46.825912  791960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:49:46.829314  791960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:49:46.832664  791960 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:49:46.832761  791960 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:49:46.872213  791960 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:49:46.872330  791960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:49:46.966913  791960 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:49:46.95674444 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:49:46.967022  791960 docker.go:319] overlay module found
	I1115 11:49:46.970300  791960 out.go:179] * Using the docker driver based on user configuration
	I1115 11:49:46.973178  791960 start.go:309] selected driver: docker
	I1115 11:49:46.973203  791960 start.go:930] validating driver "docker" against <nil>
	I1115 11:49:46.973223  791960 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:49:46.974019  791960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:49:47.065978  791960 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:49:47.055906364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:49:47.066133  791960 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1115 11:49:47.066157  791960 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1115 11:49:47.066374  791960 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 11:49:47.069870  791960 out.go:179] * Using Docker driver with root privileges
	I1115 11:49:47.073045  791960 cni.go:84] Creating CNI manager for ""
	I1115 11:49:47.073115  791960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:49:47.073128  791960 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 11:49:47.073221  791960 start.go:353] cluster config:
	{Name:newest-cni-600818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:49:47.076497  791960 out.go:179] * Starting "newest-cni-600818" primary control-plane node in "newest-cni-600818" cluster
	I1115 11:49:47.079384  791960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:49:47.082327  791960 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:49:47.085018  791960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:49:47.085043  791960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:49:47.085072  791960 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 11:49:47.085083  791960 cache.go:65] Caching tarball of preloaded images
	I1115 11:49:47.085160  791960 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:49:47.085169  791960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:49:47.085293  791960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/config.json ...
	I1115 11:49:47.085317  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/config.json: {Name:mk7de3b3a8d810d2120ca1d552d370332a21b889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:47.109969  791960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:49:47.109993  791960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:49:47.110007  791960 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:49:47.110034  791960 start.go:360] acquireMachinesLock for newest-cni-600818: {Name:mkadfb381b8085c410b4f5d50b3173a97fec4ebd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:47.110143  791960 start.go:364] duration metric: took 89.019µs to acquireMachinesLock for "newest-cni-600818"
	I1115 11:49:47.110167  791960 start.go:93] Provisioning new machine with config: &{Name:newest-cni-600818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:49:47.110243  791960 start.go:125] createHost starting for "" (driver="docker")
	I1115 11:49:48.120047  787845 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502704159s
	I1115 11:49:48.154135  787845 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 11:49:48.170821  787845 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 11:49:48.185288  787845 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 11:49:48.185499  787845 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-126380 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 11:49:48.208541  787845 kubeadm.go:319] [bootstrap-token] Using token: wrmliq.1xiul888wuvtqxks
	I1115 11:49:48.211733  787845 out.go:252]   - Configuring RBAC rules ...
	I1115 11:49:48.211859  787845 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 11:49:48.220083  787845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 11:49:48.233848  787845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 11:49:48.239669  787845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 11:49:48.245263  787845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 11:49:48.250200  787845 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 11:49:48.527960  787845 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 11:49:49.039736  787845 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 11:49:49.526983  787845 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 11:49:49.528427  787845 kubeadm.go:319] 
	I1115 11:49:49.528504  787845 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 11:49:49.528510  787845 kubeadm.go:319] 
	I1115 11:49:49.528591  787845 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 11:49:49.528600  787845 kubeadm.go:319] 
	I1115 11:49:49.528626  787845 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 11:49:49.529130  787845 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 11:49:49.529200  787845 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 11:49:49.529214  787845 kubeadm.go:319] 
	I1115 11:49:49.529272  787845 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 11:49:49.529276  787845 kubeadm.go:319] 
	I1115 11:49:49.529326  787845 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 11:49:49.529331  787845 kubeadm.go:319] 
	I1115 11:49:49.529385  787845 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 11:49:49.529463  787845 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 11:49:49.529535  787845 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 11:49:49.529539  787845 kubeadm.go:319] 
	I1115 11:49:49.529844  787845 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 11:49:49.529933  787845 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 11:49:49.529938  787845 kubeadm.go:319] 
	I1115 11:49:49.530232  787845 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wrmliq.1xiul888wuvtqxks \
	I1115 11:49:49.530347  787845 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a \
	I1115 11:49:49.530545  787845 kubeadm.go:319] 	--control-plane 
	I1115 11:49:49.530556  787845 kubeadm.go:319] 
	I1115 11:49:49.530837  787845 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 11:49:49.530847  787845 kubeadm.go:319] 
	I1115 11:49:49.531122  787845 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wrmliq.1xiul888wuvtqxks \
	I1115 11:49:49.531420  787845 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a 
	I1115 11:49:49.536492  787845 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 11:49:49.536744  787845 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 11:49:49.536914  787845 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 11:49:49.536943  787845 cni.go:84] Creating CNI manager for ""
	I1115 11:49:49.536951  787845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:49:49.541543  787845 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 11:49:49.544694  787845 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 11:49:49.549712  787845 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 11:49:49.549780  787845 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 11:49:49.570796  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 11:49:50.017006  787845 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 11:49:50.017166  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:50.017257  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-126380 minikube.k8s.io/updated_at=2025_11_15T11_49_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=no-preload-126380 minikube.k8s.io/primary=true
	I1115 11:49:50.393141  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:50.393201  787845 ops.go:34] apiserver oom_adj: -16
	I1115 11:49:50.894257  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:47.114529  791960 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 11:49:47.114823  791960 start.go:159] libmachine.API.Create for "newest-cni-600818" (driver="docker")
	I1115 11:49:47.114866  791960 client.go:173] LocalClient.Create starting
	I1115 11:49:47.114955  791960 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 11:49:47.114987  791960 main.go:143] libmachine: Decoding PEM data...
	I1115 11:49:47.115002  791960 main.go:143] libmachine: Parsing certificate...
	I1115 11:49:47.115053  791960 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 11:49:47.115071  791960 main.go:143] libmachine: Decoding PEM data...
	I1115 11:49:47.115085  791960 main.go:143] libmachine: Parsing certificate...
	I1115 11:49:47.115474  791960 cli_runner.go:164] Run: docker network inspect newest-cni-600818 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 11:49:47.138330  791960 cli_runner.go:211] docker network inspect newest-cni-600818 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 11:49:47.138413  791960 network_create.go:284] running [docker network inspect newest-cni-600818] to gather additional debugging logs...
	I1115 11:49:47.138431  791960 cli_runner.go:164] Run: docker network inspect newest-cni-600818
	W1115 11:49:47.161799  791960 cli_runner.go:211] docker network inspect newest-cni-600818 returned with exit code 1
	I1115 11:49:47.161836  791960 network_create.go:287] error running [docker network inspect newest-cni-600818]: docker network inspect newest-cni-600818: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-600818 not found
	I1115 11:49:47.161850  791960 network_create.go:289] output of [docker network inspect newest-cni-600818]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-600818 not found
	
	** /stderr **
	I1115 11:49:47.161962  791960 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:49:47.191336  791960 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-70b4341e5839 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:cf:e4:18:31:11} reservation:<nil>}
	I1115 11:49:47.191863  791960 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5353e0ad5224 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:f4:9a:df:ce:52} reservation:<nil>}
	I1115 11:49:47.192403  791960 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-cf2ab118f937 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:c9:22:19:21:27} reservation:<nil>}
	I1115 11:49:47.192985  791960 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400196d8a0}
	I1115 11:49:47.193010  791960 network_create.go:124] attempt to create docker network newest-cni-600818 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1115 11:49:47.193065  791960 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-600818 newest-cni-600818
	I1115 11:49:47.261629  791960 network_create.go:108] docker network newest-cni-600818 192.168.76.0/24 created
	I1115 11:49:47.261658  791960 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-600818" container
	I1115 11:49:47.261729  791960 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 11:49:47.279970  791960 cli_runner.go:164] Run: docker volume create newest-cni-600818 --label name.minikube.sigs.k8s.io=newest-cni-600818 --label created_by.minikube.sigs.k8s.io=true
	I1115 11:49:47.300475  791960 oci.go:103] Successfully created a docker volume newest-cni-600818
	I1115 11:49:47.300553  791960 cli_runner.go:164] Run: docker run --rm --name newest-cni-600818-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-600818 --entrypoint /usr/bin/test -v newest-cni-600818:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 11:49:47.911996  791960 oci.go:107] Successfully prepared a docker volume newest-cni-600818
	I1115 11:49:47.912067  791960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:49:47.912077  791960 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 11:49:47.912140  791960 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-600818:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 11:49:51.394101  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:51.893498  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:52.394087  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:52.893253  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:53.393659  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:53.893567  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:54.288687  787845 kubeadm.go:1114] duration metric: took 4.271573473s to wait for elevateKubeSystemPrivileges
	I1115 11:49:54.288715  787845 kubeadm.go:403] duration metric: took 25.131922265s to StartCluster
	I1115 11:49:54.288733  787845 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:54.288793  787845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:49:54.289531  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:54.289751  787845 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:49:54.289834  787845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 11:49:54.290054  787845 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:49:54.290085  787845 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:49:54.290141  787845 addons.go:70] Setting storage-provisioner=true in profile "no-preload-126380"
	I1115 11:49:54.290155  787845 addons.go:239] Setting addon storage-provisioner=true in "no-preload-126380"
	I1115 11:49:54.290176  787845 host.go:66] Checking if "no-preload-126380" exists ...
	I1115 11:49:54.290649  787845 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:49:54.291036  787845 addons.go:70] Setting default-storageclass=true in profile "no-preload-126380"
	I1115 11:49:54.291057  787845 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-126380"
	I1115 11:49:54.291315  787845 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:49:54.293097  787845 out.go:179] * Verifying Kubernetes components...
	I1115 11:49:54.299816  787845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:49:54.326966  787845 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:54.330992  787845 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:49:54.331015  787845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:49:54.331084  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:54.337815  787845 addons.go:239] Setting addon default-storageclass=true in "no-preload-126380"
	I1115 11:49:54.337856  787845 host.go:66] Checking if "no-preload-126380" exists ...
	I1115 11:49:54.338262  787845 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:49:54.366469  787845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:49:54.377154  787845 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:49:54.377174  787845 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:49:54.377247  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:54.405387  787845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:49:54.570701  787845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 11:49:54.612680  787845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:49:54.627529  787845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:49:54.647554  787845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:49:54.960730  787845 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1115 11:49:54.962538  787845 node_ready.go:35] waiting up to 6m0s for node "no-preload-126380" to be "Ready" ...
	I1115 11:49:55.417759  787845 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1115 11:49:55.420763  787845 addons.go:515] duration metric: took 1.130654871s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1115 11:49:55.471490  787845 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-126380" context rescaled to 1 replicas
	I1115 11:49:52.809709  791960 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-600818:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.897535012s)
	I1115 11:49:52.809739  791960 kic.go:203] duration metric: took 4.897658376s to extract preloaded images to volume ...
	W1115 11:49:52.809902  791960 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 11:49:52.810004  791960 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 11:49:52.903070  791960 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-600818 --name newest-cni-600818 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-600818 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-600818 --network newest-cni-600818 --ip 192.168.76.2 --volume newest-cni-600818:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 11:49:53.267451  791960 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Running}}
	I1115 11:49:53.295283  791960 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:49:53.325939  791960 cli_runner.go:164] Run: docker exec newest-cni-600818 stat /var/lib/dpkg/alternatives/iptables
	I1115 11:49:53.382416  791960 oci.go:144] the created container "newest-cni-600818" has a running status.
	I1115 11:49:53.382456  791960 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa...
	I1115 11:49:54.016354  791960 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 11:49:54.042890  791960 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:49:54.082878  791960 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 11:49:54.082901  791960 kic_runner.go:114] Args: [docker exec --privileged newest-cni-600818 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 11:49:54.162255  791960 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:49:54.189282  791960 machine.go:94] provisionDockerMachine start ...
	I1115 11:49:54.189374  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:54.214336  791960 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:54.214666  791960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 11:49:54.214681  791960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:49:54.215215  791960 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54842->127.0.0.1:33824: read: connection reset by peer
	W1115 11:49:56.966205  787845 node_ready.go:57] node "no-preload-126380" has "Ready":"False" status (will retry)
	W1115 11:49:59.466721  787845 node_ready.go:57] node "no-preload-126380" has "Ready":"False" status (will retry)
	I1115 11:49:57.380583  791960 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-600818
	
	I1115 11:49:57.380664  791960 ubuntu.go:182] provisioning hostname "newest-cni-600818"
	I1115 11:49:57.380770  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:57.407442  791960 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:57.407761  791960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 11:49:57.407778  791960 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-600818 && echo "newest-cni-600818" | sudo tee /etc/hostname
	I1115 11:49:57.579186  791960 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-600818
	
	I1115 11:49:57.579291  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:57.602553  791960 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:57.602873  791960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 11:49:57.602896  791960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-600818' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-600818/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-600818' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:49:57.761609  791960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
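The exchange above is libmachine provisioning the node over the forwarded SSH port (127.0.0.1:33824) with the generated id_rsa key, first reading the hostname and then setting it. Below is a minimal Go sketch of that kind of probe, using golang.org/x/crypto/ssh and the host, port, and key path reported in this log; it is illustrative only and not minikube's own code path.

    // Sketch only: dial the forwarded SSH port from the log and run `hostname`,
    // roughly what libmachine's provisioner does above.
    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test rig
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33824", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("hostname: %s", out)
    }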
	I1115 11:49:57.761661  791960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:49:57.761685  791960 ubuntu.go:190] setting up certificates
	I1115 11:49:57.761696  791960 provision.go:84] configureAuth start
	I1115 11:49:57.761773  791960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-600818
	I1115 11:49:57.792503  791960 provision.go:143] copyHostCerts
	I1115 11:49:57.792580  791960 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:49:57.792595  791960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:49:57.792668  791960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:49:57.792762  791960 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:49:57.792772  791960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:49:57.792799  791960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:49:57.792852  791960 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:49:57.792965  791960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:49:57.793001  791960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:49:57.793076  791960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.newest-cni-600818 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-600818]
	I1115 11:49:58.427067  791960 provision.go:177] copyRemoteCerts
	I1115 11:49:58.427198  791960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:49:58.427273  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:58.446723  791960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:49:58.553341  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 11:49:58.571796  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 11:49:58.597903  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:49:58.618340  791960 provision.go:87] duration metric: took 856.616016ms to configureAuth
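configureAuth above copies the host CA material and then signs a server certificate whose SANs are listed in the provision.go line (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-600818). The sketch below issues a certificate with that SAN set using Go's crypto/x509; it self-signs for brevity instead of chaining to the minikube CA, so treat it as an illustration rather than the provisioner's implementation.

    // Minimal sketch: server cert with the SANs reported above, self-signed.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-600818"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration in the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-600818"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }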
	I1115 11:49:58.618407  791960 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:49:58.618617  791960 config.go:182] Loaded profile config "newest-cni-600818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:49:58.618770  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:58.637559  791960 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:58.637953  791960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 11:49:58.637973  791960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:49:58.934751  791960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:49:58.934817  791960 machine.go:97] duration metric: took 4.745512604s to provisionDockerMachine
	I1115 11:49:58.934840  791960 client.go:176] duration metric: took 11.819967418s to LocalClient.Create
	I1115 11:49:58.934874  791960 start.go:167] duration metric: took 11.820053294s to libmachine.API.Create "newest-cni-600818"
	I1115 11:49:58.934895  791960 start.go:293] postStartSetup for "newest-cni-600818" (driver="docker")
	I1115 11:49:58.934919  791960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:49:58.935000  791960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:49:58.935077  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:58.954566  791960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:49:59.065936  791960 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:49:59.069773  791960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:49:59.069818  791960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:49:59.069830  791960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:49:59.069893  791960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:49:59.069993  791960 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:49:59.070099  791960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:49:59.078650  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:49:59.096177  791960 start.go:296] duration metric: took 161.254344ms for postStartSetup
	I1115 11:49:59.096550  791960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-600818
	I1115 11:49:59.114717  791960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/config.json ...
	I1115 11:49:59.115018  791960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:49:59.115070  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:59.135596  791960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:49:59.244729  791960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:49:59.249644  791960 start.go:128] duration metric: took 12.139386047s to createHost
	I1115 11:49:59.249666  791960 start.go:83] releasing machines lock for "newest-cni-600818", held for 12.139514795s
	I1115 11:49:59.249736  791960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-600818
	I1115 11:49:59.273490  791960 ssh_runner.go:195] Run: cat /version.json
	I1115 11:49:59.273551  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:59.273778  791960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:49:59.273841  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:59.300748  791960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:49:59.325339  791960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:49:59.408538  791960 ssh_runner.go:195] Run: systemctl --version
	I1115 11:49:59.509635  791960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:49:59.547425  791960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:49:59.552680  791960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:49:59.552757  791960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:49:59.582761  791960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 11:49:59.582797  791960 start.go:496] detecting cgroup driver to use...
	I1115 11:49:59.582830  791960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:49:59.582883  791960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:49:59.602735  791960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:49:59.616030  791960 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:49:59.616150  791960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:49:59.634324  791960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:49:59.652385  791960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:49:59.788158  791960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:49:59.915687  791960 docker.go:234] disabling docker service ...
	I1115 11:49:59.915767  791960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:49:59.937309  791960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:49:59.950901  791960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:50:00.220648  791960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:50:00.527980  791960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:50:00.550711  791960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:50:00.582026  791960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:50:00.582104  791960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:00.594412  791960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:50:00.594542  791960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:00.606808  791960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:00.617986  791960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:00.631266  791960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:50:00.642683  791960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:00.654360  791960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:00.672604  791960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:00.683717  791960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:50:00.693618  791960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:50:00.703953  791960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:00.840808  791960 ssh_runner.go:195] Run: sudo systemctl restart crio
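The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image to registry.k8s.io/pause:3.10.1 and force the cgroupfs cgroup manager before restarting CRI-O. A small Go sketch of the same in-memory rewrite follows; the starting values in the sample config string are assumptions, not read from the node.

    // Sketch of the two sed edits above, applied to an in-memory config.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Assumed original values; the real file comes from the kicbase image.
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"

        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        fmt.Print(conf)
    }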
	I1115 11:50:01.129015  791960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:50:01.129107  791960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:50:01.133399  791960 start.go:564] Will wait 60s for crictl version
	I1115 11:50:01.133465  791960 ssh_runner.go:195] Run: which crictl
	I1115 11:50:01.137593  791960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:50:01.171478  791960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:50:01.171616  791960 ssh_runner.go:195] Run: crio --version
	I1115 11:50:01.203073  791960 ssh_runner.go:195] Run: crio --version
	I1115 11:50:01.242602  791960 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:50:01.245507  791960 cli_runner.go:164] Run: docker network inspect newest-cni-600818 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:50:01.266013  791960 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 11:50:01.271310  791960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
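The one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the network gateway (192.168.76.1). The same filter-and-append logic as a Go sketch, writing the result to stdout rather than back to /etc/hosts:

    // Sketch: drop any existing host.minikube.internal entry and append the
    // gateway IP reported in the log.
    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        raw, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.168.76.1\thost.minikube.internal")
        fmt.Println(strings.Join(kept, "\n"))
    }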
	I1115 11:50:01.286397  791960 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 11:50:01.289354  791960 kubeadm.go:884] updating cluster {Name:newest-cni-600818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:50:01.289562  791960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:50:01.289668  791960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:50:01.325933  791960 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:50:01.325957  791960 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:50:01.326050  791960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:50:01.356342  791960 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:50:01.356366  791960 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:50:01.356375  791960 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 11:50:01.356517  791960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-600818 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:50:01.356630  791960 ssh_runner.go:195] Run: crio config
	I1115 11:50:01.428703  791960 cni.go:84] Creating CNI manager for ""
	I1115 11:50:01.428731  791960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:50:01.428748  791960 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 11:50:01.428793  791960 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-600818 NodeName:newest-cni-600818 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:50:01.429079  791960 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-600818"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 11:50:01.429180  791960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:50:01.438227  791960 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:50:01.438303  791960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:50:01.452688  791960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 11:50:01.468378  791960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:50:01.482279  791960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
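The generated kubeadm config dumped above carries the kubeadm.pod-network-cidr extra option through to KubeProxyConfiguration.clusterCIDR (10.42.0.0/16). The throwaway Go sketch below re-reads the written file and checks that the two agree, using gopkg.in/yaml.v3; it is not part of the test suite, and the file path is the one just scp'd.

    // Sketch: verify clusterCIDR in the written kubeadm config matches 10.42.0.0/16.
    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            log.Fatal(err)
        }
        // The file is a multi-document YAML separated by "---" lines.
        for _, doc := range strings.Split(string(raw), "\n---\n") {
            var m map[string]interface{}
            if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
                log.Fatal(err)
            }
            if m["kind"] == "KubeProxyConfiguration" {
                fmt.Println("clusterCIDR matches pod-network-cidr:", m["clusterCIDR"] == "10.42.0.0/16")
            }
        }
    }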
	I1115 11:50:01.496724  791960 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:50:01.500792  791960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:50:01.512054  791960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:01.630116  791960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:50:01.647239  791960 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818 for IP: 192.168.76.2
	I1115 11:50:01.647260  791960 certs.go:195] generating shared ca certs ...
	I1115 11:50:01.647277  791960 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:01.647425  791960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:50:01.647476  791960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:50:01.647487  791960 certs.go:257] generating profile certs ...
	I1115 11:50:01.647555  791960 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/client.key
	I1115 11:50:01.647574  791960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/client.crt with IP's: []
	I1115 11:50:01.919017  791960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/client.crt ...
	I1115 11:50:01.919050  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/client.crt: {Name:mk84da1a564d90a292e833d8d7f924ee29584c8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:01.919328  791960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/client.key ...
	I1115 11:50:01.919343  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/client.key: {Name:mk5a8aee48f197cf42f8f8a6d14ba2e1baa11bc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:01.919439  791960 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key.a60e7b42
	I1115 11:50:01.919455  791960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.crt.a60e7b42 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1115 11:50:02.940383  791960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.crt.a60e7b42 ...
	I1115 11:50:02.940414  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.crt.a60e7b42: {Name:mk09ada8da635957818d702e0257f698e34f4b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:02.940599  791960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key.a60e7b42 ...
	I1115 11:50:02.940613  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key.a60e7b42: {Name:mk3be3027fd498f5c96a9fe43585aeb99ea2dc6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:02.940705  791960 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.crt.a60e7b42 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.crt
	I1115 11:50:02.940791  791960 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key.a60e7b42 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key
	I1115 11:50:02.940875  791960 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.key
	I1115 11:50:02.940893  791960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.crt with IP's: []
	I1115 11:50:03.747651  791960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.crt ...
	I1115 11:50:03.747685  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.crt: {Name:mk91a17b2444f6eb3a03908e0dd6639d785e5cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:03.747905  791960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.key ...
	I1115 11:50:03.747921  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.key: {Name:mk39c1a57ff08354049bdb83c2052ef06cf4d0f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:03.748110  791960 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:50:03.748159  791960 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:50:03.748171  791960 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:50:03.748197  791960 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:50:03.748232  791960 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:50:03.748257  791960 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:50:03.748304  791960 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:50:03.748896  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:50:03.767572  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:50:03.786531  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:50:03.807360  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:50:03.828709  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 11:50:03.848181  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 11:50:03.867570  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:50:03.884673  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 11:50:03.903301  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:50:03.924346  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:50:03.945059  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:50:03.970961  791960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:50:03.987349  791960 ssh_runner.go:195] Run: openssl version
	I1115 11:50:03.994201  791960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:50:04.005680  791960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:50:04.011050  791960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:50:04.011169  791960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:50:04.053168  791960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:50:04.061954  791960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:50:04.070605  791960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:50:04.074630  791960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:50:04.074707  791960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:50:04.117878  791960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:50:04.126371  791960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:50:04.135167  791960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:04.139401  791960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:04.139475  791960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:04.180969  791960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
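The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) come from openssl x509 -hash on each installed CA file. The sketch below only parses one of the PEM files and prints its subject and expiry as a quick sanity check; it does not recompute the OpenSSL subject hash.

    // Sketch: parse an installed CA PEM and print subject and expiry.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        raw, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("subject:", cert.Subject, "expires:", cert.NotAfter)
    }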
	I1115 11:50:04.189784  791960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:50:04.193276  791960 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 11:50:04.193328  791960 kubeadm.go:401] StartCluster: {Name:newest-cni-600818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:50:04.193419  791960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:50:04.193473  791960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:50:04.228953  791960 cri.go:89] found id: ""
	I1115 11:50:04.229034  791960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:50:04.243274  791960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 11:50:04.251097  791960 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 11:50:04.251212  791960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 11:50:04.261797  791960 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 11:50:04.261816  791960 kubeadm.go:158] found existing configuration files:
	
	I1115 11:50:04.261894  791960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 11:50:04.271250  791960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 11:50:04.271333  791960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 11:50:04.279073  791960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 11:50:04.287572  791960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 11:50:04.287637  791960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 11:50:04.295445  791960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 11:50:04.303281  791960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 11:50:04.303382  791960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 11:50:04.310828  791960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 11:50:04.320161  791960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 11:50:04.320290  791960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 11:50:04.328408  791960 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 11:50:04.374656  791960 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 11:50:04.375000  791960 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 11:50:04.398165  791960 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 11:50:04.398245  791960 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 11:50:04.398286  791960 kubeadm.go:319] OS: Linux
	I1115 11:50:04.398339  791960 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 11:50:04.398399  791960 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 11:50:04.398454  791960 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 11:50:04.398508  791960 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 11:50:04.398563  791960 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 11:50:04.398615  791960 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 11:50:04.398667  791960 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 11:50:04.398721  791960 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 11:50:04.398774  791960 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 11:50:04.473553  791960 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 11:50:04.473672  791960 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 11:50:04.473772  791960 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 11:50:04.481196  791960 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1115 11:50:01.968059  787845 node_ready.go:57] node "no-preload-126380" has "Ready":"False" status (will retry)
	W1115 11:50:04.466262  787845 node_ready.go:57] node "no-preload-126380" has "Ready":"False" status (will retry)
	I1115 11:50:04.487266  791960 out.go:252]   - Generating certificates and keys ...
	I1115 11:50:04.487412  791960 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 11:50:04.487498  791960 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 11:50:05.734767  791960 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 11:50:06.618928  791960 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1115 11:50:06.966188  787845 node_ready.go:57] node "no-preload-126380" has "Ready":"False" status (will retry)
	I1115 11:50:09.466742  787845 node_ready.go:49] node "no-preload-126380" is "Ready"
	I1115 11:50:09.466766  787845 node_ready.go:38] duration metric: took 14.504197769s for node "no-preload-126380" to be "Ready" ...
	I1115 11:50:09.466779  787845 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:50:09.466840  787845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:50:09.481539  787845 api_server.go:72] duration metric: took 15.191757012s to wait for apiserver process to appear ...
	I1115 11:50:09.481560  787845 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:50:09.481580  787845 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 11:50:09.490856  787845 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 11:50:09.492061  787845 api_server.go:141] control plane version: v1.34.1
	I1115 11:50:09.492083  787845 api_server.go:131] duration metric: took 10.515737ms to wait for apiserver health ...
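The healthz wait above issues a GET against https://192.168.85.2:8443/healthz and treats a 200 "ok" as healthy. A bare-bones Go version of that probe follows; TLS verification is skipped here for brevity, whereas the real check trusts the cluster CA.

    // Sketch: probe the apiserver healthz endpoint reported in the log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // brevity only
        }
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body))
    }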
	I1115 11:50:09.492091  787845 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:50:09.506521  787845 system_pods.go:59] 8 kube-system pods found
	I1115 11:50:09.506555  787845 system_pods.go:61] "coredns-66bc5c9577-m2hwn" [ff6e9c80-26d2-46ef-8778-38324bb83386] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:50:09.506562  787845 system_pods.go:61] "etcd-no-preload-126380" [5f26f710-de87-44df-9c7f-11ba016586d5] Running
	I1115 11:50:09.506568  787845 system_pods.go:61] "kindnet-7vrr2" [c5da489a-d25e-49a1-95b9-c868981a97e8] Running
	I1115 11:50:09.506572  787845 system_pods.go:61] "kube-apiserver-no-preload-126380" [27004b84-770c-487b-8fe0-926dd013d264] Running
	I1115 11:50:09.506577  787845 system_pods.go:61] "kube-controller-manager-no-preload-126380" [05c30b3e-e44d-4daa-afed-99f025d187b8] Running
	I1115 11:50:09.506581  787845 system_pods.go:61] "kube-proxy-zhsz4" [64878ec8-f351-4aa1-b2a9-7a6b5c705fcd] Running
	I1115 11:50:09.506585  787845 system_pods.go:61] "kube-scheduler-no-preload-126380" [f2b8c98f-6984-434e-a28c-747929bb80ae] Running
	I1115 11:50:09.506593  787845 system_pods.go:61] "storage-provisioner" [31e6610d-bf36-4446-8c1f-c0d4cd2563e6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:50:09.506601  787845 system_pods.go:74] duration metric: took 14.502767ms to wait for pod list to return data ...
	I1115 11:50:09.506609  787845 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:50:09.514282  787845 default_sa.go:45] found service account: "default"
	I1115 11:50:09.514306  787845 default_sa.go:55] duration metric: took 7.690929ms for default service account to be created ...
	I1115 11:50:09.514316  787845 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:50:09.522872  787845 system_pods.go:86] 8 kube-system pods found
	I1115 11:50:09.522957  787845 system_pods.go:89] "coredns-66bc5c9577-m2hwn" [ff6e9c80-26d2-46ef-8778-38324bb83386] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:50:09.522981  787845 system_pods.go:89] "etcd-no-preload-126380" [5f26f710-de87-44df-9c7f-11ba016586d5] Running
	I1115 11:50:09.523000  787845 system_pods.go:89] "kindnet-7vrr2" [c5da489a-d25e-49a1-95b9-c868981a97e8] Running
	I1115 11:50:09.523031  787845 system_pods.go:89] "kube-apiserver-no-preload-126380" [27004b84-770c-487b-8fe0-926dd013d264] Running
	I1115 11:50:09.523055  787845 system_pods.go:89] "kube-controller-manager-no-preload-126380" [05c30b3e-e44d-4daa-afed-99f025d187b8] Running
	I1115 11:50:09.523074  787845 system_pods.go:89] "kube-proxy-zhsz4" [64878ec8-f351-4aa1-b2a9-7a6b5c705fcd] Running
	I1115 11:50:09.523093  787845 system_pods.go:89] "kube-scheduler-no-preload-126380" [f2b8c98f-6984-434e-a28c-747929bb80ae] Running
	I1115 11:50:09.523129  787845 system_pods.go:89] "storage-provisioner" [31e6610d-bf36-4446-8c1f-c0d4cd2563e6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:50:09.523164  787845 retry.go:31] will retry after 297.961286ms: missing components: kube-dns
	I1115 11:50:09.825020  787845 system_pods.go:86] 8 kube-system pods found
	I1115 11:50:09.825072  787845 system_pods.go:89] "coredns-66bc5c9577-m2hwn" [ff6e9c80-26d2-46ef-8778-38324bb83386] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:50:09.825081  787845 system_pods.go:89] "etcd-no-preload-126380" [5f26f710-de87-44df-9c7f-11ba016586d5] Running
	I1115 11:50:09.825090  787845 system_pods.go:89] "kindnet-7vrr2" [c5da489a-d25e-49a1-95b9-c868981a97e8] Running
	I1115 11:50:09.825095  787845 system_pods.go:89] "kube-apiserver-no-preload-126380" [27004b84-770c-487b-8fe0-926dd013d264] Running
	I1115 11:50:09.825104  787845 system_pods.go:89] "kube-controller-manager-no-preload-126380" [05c30b3e-e44d-4daa-afed-99f025d187b8] Running
	I1115 11:50:09.825108  787845 system_pods.go:89] "kube-proxy-zhsz4" [64878ec8-f351-4aa1-b2a9-7a6b5c705fcd] Running
	I1115 11:50:09.825111  787845 system_pods.go:89] "kube-scheduler-no-preload-126380" [f2b8c98f-6984-434e-a28c-747929bb80ae] Running
	I1115 11:50:09.825119  787845 system_pods.go:89] "storage-provisioner" [31e6610d-bf36-4446-8c1f-c0d4cd2563e6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:50:09.825134  787845 retry.go:31] will retry after 280.658865ms: missing components: kube-dns
	I1115 11:50:10.110813  787845 system_pods.go:86] 8 kube-system pods found
	I1115 11:50:10.110846  787845 system_pods.go:89] "coredns-66bc5c9577-m2hwn" [ff6e9c80-26d2-46ef-8778-38324bb83386] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:50:10.110853  787845 system_pods.go:89] "etcd-no-preload-126380" [5f26f710-de87-44df-9c7f-11ba016586d5] Running
	I1115 11:50:10.110859  787845 system_pods.go:89] "kindnet-7vrr2" [c5da489a-d25e-49a1-95b9-c868981a97e8] Running
	I1115 11:50:10.110864  787845 system_pods.go:89] "kube-apiserver-no-preload-126380" [27004b84-770c-487b-8fe0-926dd013d264] Running
	I1115 11:50:10.110869  787845 system_pods.go:89] "kube-controller-manager-no-preload-126380" [05c30b3e-e44d-4daa-afed-99f025d187b8] Running
	I1115 11:50:10.110874  787845 system_pods.go:89] "kube-proxy-zhsz4" [64878ec8-f351-4aa1-b2a9-7a6b5c705fcd] Running
	I1115 11:50:10.110879  787845 system_pods.go:89] "kube-scheduler-no-preload-126380" [f2b8c98f-6984-434e-a28c-747929bb80ae] Running
	I1115 11:50:10.110884  787845 system_pods.go:89] "storage-provisioner" [31e6610d-bf36-4446-8c1f-c0d4cd2563e6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:50:10.110899  787845 retry.go:31] will retry after 382.962418ms: missing components: kube-dns
	I1115 11:50:10.498594  787845 system_pods.go:86] 8 kube-system pods found
	I1115 11:50:10.498621  787845 system_pods.go:89] "coredns-66bc5c9577-m2hwn" [ff6e9c80-26d2-46ef-8778-38324bb83386] Running
	I1115 11:50:10.498628  787845 system_pods.go:89] "etcd-no-preload-126380" [5f26f710-de87-44df-9c7f-11ba016586d5] Running
	I1115 11:50:10.498632  787845 system_pods.go:89] "kindnet-7vrr2" [c5da489a-d25e-49a1-95b9-c868981a97e8] Running
	I1115 11:50:10.498636  787845 system_pods.go:89] "kube-apiserver-no-preload-126380" [27004b84-770c-487b-8fe0-926dd013d264] Running
	I1115 11:50:10.498641  787845 system_pods.go:89] "kube-controller-manager-no-preload-126380" [05c30b3e-e44d-4daa-afed-99f025d187b8] Running
	I1115 11:50:10.498644  787845 system_pods.go:89] "kube-proxy-zhsz4" [64878ec8-f351-4aa1-b2a9-7a6b5c705fcd] Running
	I1115 11:50:10.498648  787845 system_pods.go:89] "kube-scheduler-no-preload-126380" [f2b8c98f-6984-434e-a28c-747929bb80ae] Running
	I1115 11:50:10.498652  787845 system_pods.go:89] "storage-provisioner" [31e6610d-bf36-4446-8c1f-c0d4cd2563e6] Running
	I1115 11:50:10.498659  787845 system_pods.go:126] duration metric: took 984.336962ms to wait for k8s-apps to be running ...
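The system_pods wait above lists kube-system pods and retries with a short, jittered backoff until nothing is missing (here kube-dns was the last to arrive). Below is a simplified Go sketch of that loop with client-go, polling on a fixed interval and using an assumed kubeconfig path; it is not the retry.go implementation.

    // Sketch: poll kube-system until every pod reports Running.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        for {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                log.Fatal(err)
            }
            notReady := 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    notReady++
                }
            }
            if notReady == 0 {
                fmt.Println("all kube-system pods are Running")
                return
            }
            fmt.Printf("%d pod(s) not yet Running, retrying...\n", notReady)
            time.Sleep(300 * time.Millisecond)
        }
    }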
	I1115 11:50:10.498666  787845 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:50:10.498723  787845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:50:10.515426  787845 system_svc.go:56] duration metric: took 16.748935ms WaitForService to wait for kubelet
	I1115 11:50:10.515503  787845 kubeadm.go:587] duration metric: took 16.225726318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:50:10.515539  787845 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:50:10.518944  787845 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:50:10.519019  787845 node_conditions.go:123] node cpu capacity is 2
	I1115 11:50:10.519046  787845 node_conditions.go:105] duration metric: took 3.485149ms to run NodePressure ...
	I1115 11:50:10.519070  787845 start.go:242] waiting for startup goroutines ...
	I1115 11:50:10.519104  787845 start.go:247] waiting for cluster config update ...
	I1115 11:50:10.519131  787845 start.go:256] writing updated cluster config ...
	I1115 11:50:10.519485  787845 ssh_runner.go:195] Run: rm -f paused
	I1115 11:50:10.523693  787845 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:50:10.527731  787845 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m2hwn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:10.534046  787845 pod_ready.go:94] pod "coredns-66bc5c9577-m2hwn" is "Ready"
	I1115 11:50:10.534073  787845 pod_ready.go:86] duration metric: took 6.314914ms for pod "coredns-66bc5c9577-m2hwn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:10.537041  787845 pod_ready.go:83] waiting for pod "etcd-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:10.543341  787845 pod_ready.go:94] pod "etcd-no-preload-126380" is "Ready"
	I1115 11:50:10.543367  787845 pod_ready.go:86] duration metric: took 6.297273ms for pod "etcd-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:10.546430  787845 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:10.552249  787845 pod_ready.go:94] pod "kube-apiserver-no-preload-126380" is "Ready"
	I1115 11:50:10.552285  787845 pod_ready.go:86] duration metric: took 5.829272ms for pod "kube-apiserver-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:10.555177  787845 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:10.929099  787845 pod_ready.go:94] pod "kube-controller-manager-no-preload-126380" is "Ready"
	I1115 11:50:10.929132  787845 pod_ready.go:86] duration metric: took 373.931216ms for pod "kube-controller-manager-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:11.129453  787845 pod_ready.go:83] waiting for pod "kube-proxy-zhsz4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:06.843515  791960 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 11:50:07.600828  791960 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 11:50:07.922735  791960 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 11:50:07.923119  791960 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-600818] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 11:50:08.022142  791960 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 11:50:08.022609  791960 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-600818] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 11:50:09.209743  791960 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 11:50:09.439496  791960 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 11:50:10.113383  791960 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 11:50:10.114035  791960 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 11:50:10.359996  791960 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 11:50:10.683138  791960 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 11:50:11.635954  791960 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 11:50:11.800756  791960 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 11:50:11.528158  787845 pod_ready.go:94] pod "kube-proxy-zhsz4" is "Ready"
	I1115 11:50:11.528202  787845 pod_ready.go:86] duration metric: took 398.719054ms for pod "kube-proxy-zhsz4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:11.729211  787845 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:12.129488  787845 pod_ready.go:94] pod "kube-scheduler-no-preload-126380" is "Ready"
	I1115 11:50:12.129539  787845 pod_ready.go:86] duration metric: took 400.243484ms for pod "kube-scheduler-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:12.129584  787845 pod_ready.go:40] duration metric: took 1.605828488s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:50:12.223277  787845 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:50:12.226822  787845 out.go:179] * Done! kubectl is now configured to use "no-preload-126380" cluster and "default" namespace by default
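	
	The pod_ready.go lines above poll each kube-system pod until its Ready condition is True (or the pod is gone), within a 4m0s budget. The sketch below reproduces that polling pattern in a few lines of Go, assuming kubectl is on PATH and the kubeconfig already points at the no-preload-126380 cluster; it is an illustration only, not minikube's actual implementation.
	
	```go
	// wait_ready.go: a minimal sketch of the "extra wait" loop logged above.
	// Assumes kubectl is on PATH and the current kubeconfig context targets the
	// cluster under test; this mirrors the idea of pod_ready.go, nothing more.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	func allReady() (bool, error) {
		// One "<pod name> <Ready condition status>" line per kube-system pod.
		out, err := exec.Command("kubectl", "get", "pods", "-n", "kube-system",
			"-o", `jsonpath={range .items[*]}{.metadata.name} {.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).Output()
		if err != nil {
			return false, err
		}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" && !strings.HasSuffix(line, " True") {
				return false, nil // at least one pod is not Ready yet
			}
		}
		return true, nil
	}
	
	func main() {
		deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
		for time.Now().Before(deadline) {
			if ok, err := allReady(); err == nil && ok {
				fmt.Println("all kube-system pods are Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for kube-system pods")
	}
	```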
	I1115 11:50:12.229001  791960 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 11:50:12.265543  791960 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 11:50:12.265653  791960 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 11:50:12.272029  791960 out.go:252]   - Booting up control plane ...
	I1115 11:50:12.272194  791960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 11:50:12.272304  791960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 11:50:12.272436  791960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 11:50:12.329703  791960 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 11:50:12.329863  791960 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 11:50:12.342188  791960 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 11:50:12.342341  791960 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 11:50:12.342390  791960 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 11:50:12.542068  791960 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 11:50:12.542201  791960 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 11:50:15.050037  791960 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.50310232s
	I1115 11:50:15.050154  791960 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 11:50:15.050240  791960 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1115 11:50:15.050333  791960 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 11:50:15.050414  791960 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 11:50:16.997773  791960 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.948594519s
	I1115 11:50:19.282392  791960 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.23374255s
	I1115 11:50:21.551916  791960 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503005909s
	I1115 11:50:21.582945  791960 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 11:50:21.600758  791960 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 11:50:21.617263  791960 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 11:50:21.617501  791960 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-600818 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 11:50:21.634848  791960 kubeadm.go:319] [bootstrap-token] Using token: bqu6v3.1anpaql43qcy8ikq
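	
	The [kubelet-check] and [control-plane-check] lines above show kubeadm polling the kubelet's healthz endpoint and the livez/healthz endpoints of the three control-plane components until each responds. The sketch below performs the same four probes once, assuming it runs on the control-plane node itself; TLS verification is skipped only because this ad-hoc probe does not trust the cluster's serving certificates, and kubeadm's retry/backoff logic is deliberately omitted.
	
	```go
	// healthcheck.go: one-shot probes of the endpoints kubeadm checks above.
	// Assumes it runs on the newest-cni-600818 control-plane node (the
	// loopback and 192.168.76.2 addresses come from the log, not from any API).
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		endpoints := map[string]string{
			"kubelet":                 "http://127.0.0.1:10248/healthz",
			"kube-apiserver":          "https://192.168.76.2:8443/livez",
			"kube-controller-manager": "https://127.0.0.1:10257/healthz",
			"kube-scheduler":          "https://127.0.0.1:10259/livez",
		}
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Ad-hoc probe only: the component serving CAs are not loaded here.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for name, url := range endpoints {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("%-24s unreachable: %v\n", name, err)
				continue
			}
			resp.Body.Close()
			fmt.Printf("%-24s %s -> %s\n", name, url, resp.Status)
		}
	}
	```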
	
	
	==> CRI-O <==
	Nov 15 11:50:09 no-preload-126380 crio[838]: time="2025-11-15T11:50:09.69907477Z" level=info msg="Created container 3a337ed4dc74bc2479ab9ba398b0fdd6c30e29806be5577555ad30066bfd4c55: kube-system/coredns-66bc5c9577-m2hwn/coredns" id=661eaac6-80cd-4352-b285-ded417c4527c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:09 no-preload-126380 crio[838]: time="2025-11-15T11:50:09.700181572Z" level=info msg="Starting container: 3a337ed4dc74bc2479ab9ba398b0fdd6c30e29806be5577555ad30066bfd4c55" id=067322b0-d311-49cf-862d-08d0a290e418 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:50:09 no-preload-126380 crio[838]: time="2025-11-15T11:50:09.706221685Z" level=info msg="Started container" PID=2486 containerID=3a337ed4dc74bc2479ab9ba398b0fdd6c30e29806be5577555ad30066bfd4c55 description=kube-system/coredns-66bc5c9577-m2hwn/coredns id=067322b0-d311-49cf-862d-08d0a290e418 name=/runtime.v1.RuntimeService/StartContainer sandboxID=135760accfedab948196a96396f60858dcddc84bb687e93efc03f9adfdad627d
	Nov 15 11:50:12 no-preload-126380 crio[838]: time="2025-11-15T11:50:12.842528614Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ece508d6-1f7b-4ac1-b2cb-6e3b4c6ef9fb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:12 no-preload-126380 crio[838]: time="2025-11-15T11:50:12.84259707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:12 no-preload-126380 crio[838]: time="2025-11-15T11:50:12.857545527Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:222ac886896dbde8a928dd67965bbfe5c2e5135c968a5cfc65592dbb3225ae4c UID:12ccf240-d78b-47c9-923c-0c9e8a54f8d0 NetNS:/var/run/netns/85d21657-7a21-40e0-a286-19caff566506 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078fb0}] Aliases:map[]}"
	Nov 15 11:50:12 no-preload-126380 crio[838]: time="2025-11-15T11:50:12.857586882Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 11:50:12 no-preload-126380 crio[838]: time="2025-11-15T11:50:12.868778286Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:222ac886896dbde8a928dd67965bbfe5c2e5135c968a5cfc65592dbb3225ae4c UID:12ccf240-d78b-47c9-923c-0c9e8a54f8d0 NetNS:/var/run/netns/85d21657-7a21-40e0-a286-19caff566506 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078fb0}] Aliases:map[]}"
	Nov 15 11:50:12 no-preload-126380 crio[838]: time="2025-11-15T11:50:12.868969508Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 11:50:12 no-preload-126380 crio[838]: time="2025-11-15T11:50:12.872930043Z" level=info msg="Ran pod sandbox 222ac886896dbde8a928dd67965bbfe5c2e5135c968a5cfc65592dbb3225ae4c with infra container: default/busybox/POD" id=ece508d6-1f7b-4ac1-b2cb-6e3b4c6ef9fb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:12 no-preload-126380 crio[838]: time="2025-11-15T11:50:12.874124911Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=21542e33-eb18-4ea0-b6a3-450284bbeea0 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:12 no-preload-126380 crio[838]: time="2025-11-15T11:50:12.874291444Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=21542e33-eb18-4ea0-b6a3-450284bbeea0 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:12 no-preload-126380 crio[838]: time="2025-11-15T11:50:12.87436273Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=21542e33-eb18-4ea0-b6a3-450284bbeea0 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:12 no-preload-126380 crio[838]: time="2025-11-15T11:50:12.875045495Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9d59cbcf-f76a-4650-af4b-a2ddbce1e2ce name=/runtime.v1.ImageService/PullImage
	Nov 15 11:50:12 no-preload-126380 crio[838]: time="2025-11-15T11:50:12.885766773Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 11:50:15 no-preload-126380 crio[838]: time="2025-11-15T11:50:15.162013612Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9d59cbcf-f76a-4650-af4b-a2ddbce1e2ce name=/runtime.v1.ImageService/PullImage
	Nov 15 11:50:15 no-preload-126380 crio[838]: time="2025-11-15T11:50:15.162687424Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5a8f9088-ee52-4598-a88f-1c61f8eb56ea name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:15 no-preload-126380 crio[838]: time="2025-11-15T11:50:15.165883191Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c736282d-46a3-45c1-b061-1186190a5cbc name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:15 no-preload-126380 crio[838]: time="2025-11-15T11:50:15.171583773Z" level=info msg="Creating container: default/busybox/busybox" id=d10dc3fd-d3f1-410e-8301-e2daa3498834 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:15 no-preload-126380 crio[838]: time="2025-11-15T11:50:15.171697201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:15 no-preload-126380 crio[838]: time="2025-11-15T11:50:15.176849297Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:15 no-preload-126380 crio[838]: time="2025-11-15T11:50:15.177438374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:15 no-preload-126380 crio[838]: time="2025-11-15T11:50:15.193318598Z" level=info msg="Created container b0b6ba4f1287c956577599ee17fa719fa0689142c286e5e1efb5339b84c6b21e: default/busybox/busybox" id=d10dc3fd-d3f1-410e-8301-e2daa3498834 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:15 no-preload-126380 crio[838]: time="2025-11-15T11:50:15.194350355Z" level=info msg="Starting container: b0b6ba4f1287c956577599ee17fa719fa0689142c286e5e1efb5339b84c6b21e" id=b61aabb7-13da-45d5-b062-96b7fae63571 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:50:15 no-preload-126380 crio[838]: time="2025-11-15T11:50:15.196456683Z" level=info msg="Started container" PID=2539 containerID=b0b6ba4f1287c956577599ee17fa719fa0689142c286e5e1efb5339b84c6b21e description=default/busybox/busybox id=b61aabb7-13da-45d5-b062-96b7fae63571 name=/runtime.v1.RuntimeService/StartContainer sandboxID=222ac886896dbde8a928dd67965bbfe5c2e5135c968a5cfc65592dbb3225ae4c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b0b6ba4f1287c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   222ac886896db       busybox                                     default
	3a337ed4dc74b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   135760accfeda       coredns-66bc5c9577-m2hwn                    kube-system
	422a7b9481dd9       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   e97dbad8b2c0c       storage-provisioner                         kube-system
	71d415a63b41b       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   b0e05edc5f62b       kindnet-7vrr2                               kube-system
	49b9b9ebfa2f5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      26 seconds ago      Running             kube-proxy                0                   e94ae8798f6b8       kube-proxy-zhsz4                            kube-system
	a09cb8a1dce5e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      42 seconds ago      Running             kube-controller-manager   0                   7c94e1e08579e       kube-controller-manager-no-preload-126380   kube-system
	2be5b297a1937       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      42 seconds ago      Running             kube-apiserver            0                   6719dcd4b3e66       kube-apiserver-no-preload-126380            kube-system
	4460893a3f85f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      42 seconds ago      Running             kube-scheduler            0                   50c65f0c62558       kube-scheduler-no-preload-126380            kube-system
	acee64babada1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      42 seconds ago      Running             etcd                      0                   2e1a26418e897       etcd-no-preload-126380                      kube-system
	
	
	==> coredns [3a337ed4dc74bc2479ab9ba398b0fdd6c30e29806be5577555ad30066bfd4c55] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49811 - 4976 "HINFO IN 897719329829476134.7090630111296661492. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025048166s
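	
	The CoreDNS instance above serves the kube-dns ClusterIP (10.96.0.10, allocated in the kube-apiserver log further down) and logs its own HINFO self-check query. From a network namespace that can reach the service CIDR, such as a pod in the cluster, resolution against that address can be checked directly; the sketch below does so in Go and is a verification aid only, not part of minikube.
	
	```go
	// dnscheck.go: resolve the in-cluster API service name against the CoreDNS
	// ClusterIP seen in this report (10.96.0.10). Must run somewhere that can
	// reach 10.96.0.10:53, e.g. inside a pod on this cluster.
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		resolver := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				// Ignore the default resolver address and ask CoreDNS directly.
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := resolver.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("kubernetes.default resolves to:", addrs)
	}
	```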
	
	
	==> describe nodes <==
	Name:               no-preload-126380
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-126380
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=no-preload-126380
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_49_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:49:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-126380
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:50:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:50:19 +0000   Sat, 15 Nov 2025 11:49:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:50:19 +0000   Sat, 15 Nov 2025 11:49:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:50:19 +0000   Sat, 15 Nov 2025 11:49:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:50:19 +0000   Sat, 15 Nov 2025 11:50:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-126380
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                a22ae12e-ce80-4a2c-98ad-3a3e8aeb26aa
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-m2hwn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-no-preload-126380                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-7vrr2                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-126380             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-126380    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-zhsz4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-126380             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node no-preload-126380 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node no-preload-126380 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node no-preload-126380 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node no-preload-126380 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node no-preload-126380 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node no-preload-126380 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node no-preload-126380 event: Registered Node no-preload-126380 in Controller
	  Normal   NodeReady                13s                kubelet          Node no-preload-126380 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov15 11:28] overlayfs: idmapped layers are currently not supported
	[ +23.116625] overlayfs: idmapped layers are currently not supported
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	[Nov15 11:44] overlayfs: idmapped layers are currently not supported
	[Nov15 11:46] overlayfs: idmapped layers are currently not supported
	[Nov15 11:47] overlayfs: idmapped layers are currently not supported
	[ +42.475391] overlayfs: idmapped layers are currently not supported
	[Nov15 11:48] overlayfs: idmapped layers are currently not supported
	[Nov15 11:49] overlayfs: idmapped layers are currently not supported
	[Nov15 11:50] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [acee64babada1f14ac154907313f8585bc45160ea908504d3fe4ee9933579a6c] <==
	{"level":"warn","ts":"2025-11-15T11:49:43.181286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.252479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.316304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.371261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.404547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.439569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.479673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.563534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.571520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.661068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.720540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.777063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.816436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.846988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.917482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.952846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:43.983510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:44.011832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:44.035097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:44.073414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:44.096834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:44.164940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:44.223658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:44.277151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:49:44.394288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60044","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:50:22 up  3:32,  0 user,  load average: 3.48, 3.35, 2.91
	Linux no-preload-126380 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [71d415a63b41beae6225d8d4ea03c041d1409569510d0b006a9122787104e2ca] <==
	I1115 11:49:58.312718       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:49:58.390085       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 11:49:58.390289       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:49:58.390358       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:49:58.390396       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:49:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:49:58.591350       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:49:58.591509       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:49:58.591580       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:49:58.592134       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 11:49:58.891891       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:49:58.891983       1 metrics.go:72] Registering metrics
	I1115 11:49:58.892079       1 controller.go:711] "Syncing nftables rules"
	I1115 11:50:08.597006       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:50:08.597153       1 main.go:301] handling current node
	I1115 11:50:18.592926       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:50:18.592986       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2be5b297a193774971741d21ccc1be10f27b52824eaa05eecfc0ac8d51dd26fd] <==
	E1115 11:49:46.019411       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1115 11:49:46.039224       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 11:49:46.052586       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:49:46.053234       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1115 11:49:46.056796       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1115 11:49:46.074306       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:49:46.082705       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 11:49:46.275705       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:49:46.445416       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 11:49:46.474407       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 11:49:46.474448       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:49:47.546570       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:49:47.627719       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:49:47.750084       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 11:49:47.762119       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1115 11:49:47.763260       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 11:49:47.768810       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 11:49:47.790998       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 11:49:48.995097       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 11:49:49.038052       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 11:49:49.056057       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 11:49:53.653894       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 11:49:53.830124       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1115 11:49:53.935054       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:49:53.978330       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [a09cb8a1dce5e6f7a63aead597f4f989ee6f03a115e6f8d24375c62893573a91] <==
	I1115 11:49:52.820962       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-126380" podCIDRs=["10.244.0.0/24"]
	I1115 11:49:52.823481       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:49:52.829656       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 11:49:52.830486       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 11:49:52.833572       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 11:49:52.841035       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 11:49:52.841760       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 11:49:52.841820       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 11:49:52.843216       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 11:49:52.843277       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 11:49:52.843343       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 11:49:52.847515       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 11:49:52.847621       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 11:49:52.850369       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 11:49:52.850408       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 11:49:52.850475       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 11:49:52.850662       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 11:49:52.851016       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-126380"
	I1115 11:49:52.851075       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 11:49:52.852980       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 11:49:52.856033       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 11:49:52.859223       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 11:49:52.866513       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 11:49:52.867832       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 11:50:12.857497       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [49b9b9ebfa2f56228b25ff7d33fe421c0b18aae8fb9d78df5e41724b70f5d5f6] <==
	I1115 11:49:55.560882       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:49:55.654519       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:49:55.755526       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:49:55.755657       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 11:49:55.755764       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:49:55.786752       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:49:55.786875       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:49:55.793086       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:49:55.793738       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:49:55.793763       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:49:55.795172       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:49:55.795217       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:49:55.795293       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:49:55.801234       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:49:55.795935       1 config.go:309] "Starting node config controller"
	I1115 11:49:55.801321       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:49:55.801357       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:49:55.796465       1 config.go:200] "Starting service config controller"
	I1115 11:49:55.801403       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:49:55.901164       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 11:49:55.902357       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 11:49:55.902934       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4460893a3f85f0be1f3ebbba72010e4c28de8c7dc6b0316c357148b5a1f77e41] <==
	E1115 11:49:45.942417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 11:49:45.942523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 11:49:45.942602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:49:45.942674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:49:45.942749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 11:49:45.943699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 11:49:45.943780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 11:49:45.943977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 11:49:45.944118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 11:49:45.947356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 11:49:46.753585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 11:49:46.780962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 11:49:46.788616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 11:49:46.795380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 11:49:46.798662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 11:49:46.842579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 11:49:46.852895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:49:46.888091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 11:49:47.005427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 11:49:47.045867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 11:49:47.062513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 11:49:47.120831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 11:49:47.135393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 11:49:47.520916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1115 11:49:49.615504       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 11:49:52 no-preload-126380 kubelet[1995]: I1115 11:49:52.869339    1995 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 11:49:54 no-preload-126380 kubelet[1995]: I1115 11:49:54.116063    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgffk\" (UniqueName: \"kubernetes.io/projected/c5da489a-d25e-49a1-95b9-c868981a97e8-kube-api-access-kgffk\") pod \"kindnet-7vrr2\" (UID: \"c5da489a-d25e-49a1-95b9-c868981a97e8\") " pod="kube-system/kindnet-7vrr2"
	Nov 15 11:49:54 no-preload-126380 kubelet[1995]: I1115 11:49:54.117847    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c5da489a-d25e-49a1-95b9-c868981a97e8-cni-cfg\") pod \"kindnet-7vrr2\" (UID: \"c5da489a-d25e-49a1-95b9-c868981a97e8\") " pod="kube-system/kindnet-7vrr2"
	Nov 15 11:49:54 no-preload-126380 kubelet[1995]: I1115 11:49:54.118127    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5da489a-d25e-49a1-95b9-c868981a97e8-xtables-lock\") pod \"kindnet-7vrr2\" (UID: \"c5da489a-d25e-49a1-95b9-c868981a97e8\") " pod="kube-system/kindnet-7vrr2"
	Nov 15 11:49:54 no-preload-126380 kubelet[1995]: I1115 11:49:54.118274    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/64878ec8-f351-4aa1-b2a9-7a6b5c705fcd-kube-proxy\") pod \"kube-proxy-zhsz4\" (UID: \"64878ec8-f351-4aa1-b2a9-7a6b5c705fcd\") " pod="kube-system/kube-proxy-zhsz4"
	Nov 15 11:49:54 no-preload-126380 kubelet[1995]: I1115 11:49:54.118496    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5da489a-d25e-49a1-95b9-c868981a97e8-lib-modules\") pod \"kindnet-7vrr2\" (UID: \"c5da489a-d25e-49a1-95b9-c868981a97e8\") " pod="kube-system/kindnet-7vrr2"
	Nov 15 11:49:54 no-preload-126380 kubelet[1995]: I1115 11:49:54.118838    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbvs8\" (UniqueName: \"kubernetes.io/projected/64878ec8-f351-4aa1-b2a9-7a6b5c705fcd-kube-api-access-vbvs8\") pod \"kube-proxy-zhsz4\" (UID: \"64878ec8-f351-4aa1-b2a9-7a6b5c705fcd\") " pod="kube-system/kube-proxy-zhsz4"
	Nov 15 11:49:54 no-preload-126380 kubelet[1995]: I1115 11:49:54.119078    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64878ec8-f351-4aa1-b2a9-7a6b5c705fcd-xtables-lock\") pod \"kube-proxy-zhsz4\" (UID: \"64878ec8-f351-4aa1-b2a9-7a6b5c705fcd\") " pod="kube-system/kube-proxy-zhsz4"
	Nov 15 11:49:54 no-preload-126380 kubelet[1995]: I1115 11:49:54.119196    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64878ec8-f351-4aa1-b2a9-7a6b5c705fcd-lib-modules\") pod \"kube-proxy-zhsz4\" (UID: \"64878ec8-f351-4aa1-b2a9-7a6b5c705fcd\") " pod="kube-system/kube-proxy-zhsz4"
	Nov 15 11:49:54 no-preload-126380 kubelet[1995]: E1115 11:49:54.134643    1995 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-126380\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-126380' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 15 11:49:54 no-preload-126380 kubelet[1995]: E1115 11:49:54.134843    1995 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-7vrr2\" is forbidden: User \"system:node:no-preload-126380\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-126380' and this object" podUID="c5da489a-d25e-49a1-95b9-c868981a97e8" pod="kube-system/kindnet-7vrr2"
	Nov 15 11:49:55 no-preload-126380 kubelet[1995]: I1115 11:49:55.055034    1995 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 11:49:55 no-preload-126380 kubelet[1995]: W1115 11:49:55.404730    1995 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/crio-e94ae8798f6b8e9f52226ef4e883bb98c28bbcf017f71d05f1d59826b4745226 WatchSource:0}: Error finding container e94ae8798f6b8e9f52226ef4e883bb98c28bbcf017f71d05f1d59826b4745226: Status 404 returned error can't find the container with id e94ae8798f6b8e9f52226ef4e883bb98c28bbcf017f71d05f1d59826b4745226
	Nov 15 11:49:56 no-preload-126380 kubelet[1995]: I1115 11:49:56.289116    1995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zhsz4" podStartSLOduration=3.289096507 podStartE2EDuration="3.289096507s" podCreationTimestamp="2025-11-15 11:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:49:56.266050651 +0000 UTC m=+7.452608560" watchObservedRunningTime="2025-11-15 11:49:56.289096507 +0000 UTC m=+7.475654408"
	Nov 15 11:50:09 no-preload-126380 kubelet[1995]: I1115 11:50:09.129075    1995 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 11:50:09 no-preload-126380 kubelet[1995]: I1115 11:50:09.182748    1995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7vrr2" podStartSLOduration=13.374884957 podStartE2EDuration="16.182730152s" podCreationTimestamp="2025-11-15 11:49:53 +0000 UTC" firstStartedPulling="2025-11-15 11:49:55.369272232 +0000 UTC m=+6.555830133" lastFinishedPulling="2025-11-15 11:49:58.177117427 +0000 UTC m=+9.363675328" observedRunningTime="2025-11-15 11:49:58.28574339 +0000 UTC m=+9.472301324" watchObservedRunningTime="2025-11-15 11:50:09.182730152 +0000 UTC m=+20.369288069"
	Nov 15 11:50:09 no-preload-126380 kubelet[1995]: I1115 11:50:09.275574    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff6e9c80-26d2-46ef-8778-38324bb83386-config-volume\") pod \"coredns-66bc5c9577-m2hwn\" (UID: \"ff6e9c80-26d2-46ef-8778-38324bb83386\") " pod="kube-system/coredns-66bc5c9577-m2hwn"
	Nov 15 11:50:09 no-preload-126380 kubelet[1995]: I1115 11:50:09.275644    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pzwb\" (UniqueName: \"kubernetes.io/projected/ff6e9c80-26d2-46ef-8778-38324bb83386-kube-api-access-2pzwb\") pod \"coredns-66bc5c9577-m2hwn\" (UID: \"ff6e9c80-26d2-46ef-8778-38324bb83386\") " pod="kube-system/coredns-66bc5c9577-m2hwn"
	Nov 15 11:50:09 no-preload-126380 kubelet[1995]: I1115 11:50:09.275681    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/31e6610d-bf36-4446-8c1f-c0d4cd2563e6-tmp\") pod \"storage-provisioner\" (UID: \"31e6610d-bf36-4446-8c1f-c0d4cd2563e6\") " pod="kube-system/storage-provisioner"
	Nov 15 11:50:09 no-preload-126380 kubelet[1995]: I1115 11:50:09.275762    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5tl9\" (UniqueName: \"kubernetes.io/projected/31e6610d-bf36-4446-8c1f-c0d4cd2563e6-kube-api-access-k5tl9\") pod \"storage-provisioner\" (UID: \"31e6610d-bf36-4446-8c1f-c0d4cd2563e6\") " pod="kube-system/storage-provisioner"
	Nov 15 11:50:09 no-preload-126380 kubelet[1995]: W1115 11:50:09.628455    1995 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/crio-135760accfedab948196a96396f60858dcddc84bb687e93efc03f9adfdad627d WatchSource:0}: Error finding container 135760accfedab948196a96396f60858dcddc84bb687e93efc03f9adfdad627d: Status 404 returned error can't find the container with id 135760accfedab948196a96396f60858dcddc84bb687e93efc03f9adfdad627d
	Nov 15 11:50:10 no-preload-126380 kubelet[1995]: I1115 11:50:10.338107    1995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-m2hwn" podStartSLOduration=16.338087562 podStartE2EDuration="16.338087562s" podCreationTimestamp="2025-11-15 11:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:50:10.317060091 +0000 UTC m=+21.503617992" watchObservedRunningTime="2025-11-15 11:50:10.338087562 +0000 UTC m=+21.524645463"
	Nov 15 11:50:10 no-preload-126380 kubelet[1995]: I1115 11:50:10.357694    1995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.357675232 podStartE2EDuration="15.357675232s" podCreationTimestamp="2025-11-15 11:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:50:10.339440939 +0000 UTC m=+21.525998864" watchObservedRunningTime="2025-11-15 11:50:10.357675232 +0000 UTC m=+21.544233141"
	Nov 15 11:50:12 no-preload-126380 kubelet[1995]: I1115 11:50:12.609956    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbwl9\" (UniqueName: \"kubernetes.io/projected/12ccf240-d78b-47c9-923c-0c9e8a54f8d0-kube-api-access-kbwl9\") pod \"busybox\" (UID: \"12ccf240-d78b-47c9-923c-0c9e8a54f8d0\") " pod="default/busybox"
	Nov 15 11:50:12 no-preload-126380 kubelet[1995]: W1115 11:50:12.870885    1995 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/crio-222ac886896dbde8a928dd67965bbfe5c2e5135c968a5cfc65592dbb3225ae4c WatchSource:0}: Error finding container 222ac886896dbde8a928dd67965bbfe5c2e5135c968a5cfc65592dbb3225ae4c: Status 404 returned error can't find the container with id 222ac886896dbde8a928dd67965bbfe5c2e5135c968a5cfc65592dbb3225ae4c
	
	
	==> storage-provisioner [422a7b9481dd9b41ab6c34871cf016cbcced3af24cd5bb69d26221a625a0a8b2] <==
	I1115 11:50:09.648580       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 11:50:09.670225       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 11:50:09.670359       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 11:50:09.673324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:50:09.680546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:50:09.680802       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 11:50:09.681028       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a1a6883d-ab3f-4fde-8358-8e509502c15b", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-126380_07aca235-04c9-4d3e-ac93-898149f1274c became leader
	I1115 11:50:09.684303       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-126380_07aca235-04c9-4d3e-ac93-898149f1274c!
	W1115 11:50:09.716656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:50:09.723883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:50:09.785573       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-126380_07aca235-04c9-4d3e-ac93-898149f1274c!
	W1115 11:50:11.729618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:50:11.737899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:50:13.741860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:50:13.749424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:50:15.752447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:50:15.759535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:50:17.762603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:50:17.767110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:50:19.772331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:50:19.776849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:50:21.779874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:50:21.785064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-126380 -n no-preload-126380
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-126380 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.08s)
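The storage-provisioner log above acquires its leader lease on a v1 Endpoints object named k8s.io-minikube-hostpath, which is why every lease renewal emits the "v1 Endpoints is deprecated" warning. A minimal sketch for inspecting that election record by hand, assuming the no-preload-126380 context is still reachable (this command is not part of the test harness):

	kubectl --context no-preload-126380 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml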

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-600818 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-600818 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (297.362523ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:50:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-600818 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
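Exit status 11 (MK_ADDON_ENABLE_PAUSED) means the addon enable aborted in minikube's paused check, which runs `sudo runc list -f json` on the node; that command fails here because /run/runc does not exist on this crio node. A minimal sketch for reproducing the check by hand, assuming the newest-cni-600818 node is still running (these commands were not run as part of this report):

	# Same command the paused check runs on the node; it fails with "open /run/runc: no such file or directory" here.
	minikube ssh -p newest-cni-600818 "sudo runc list -f json"
	# Confirm the missing state directory that the stderr above points at.
	minikube ssh -p newest-cni-600818 "ls -ld /run/runc"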
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-600818
helpers_test.go:243: (dbg) docker inspect newest-cni-600818:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b",
	        "Created": "2025-11-15T11:49:52.920740445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 792453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:49:52.993971146Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/hostname",
	        "HostsPath": "/var/lib/docker/containers/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/hosts",
	        "LogPath": "/var/lib/docker/containers/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b-json.log",
	        "Name": "/newest-cni-600818",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-600818:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-600818",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b",
	                "LowerDir": "/var/lib/docker/overlay2/6b840733d5eb5568a1b1a5e0e7404ea4d320669e261fcc419b6f5be4f5457db2-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6b840733d5eb5568a1b1a5e0e7404ea4d320669e261fcc419b6f5be4f5457db2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6b840733d5eb5568a1b1a5e0e7404ea4d320669e261fcc419b6f5be4f5457db2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6b840733d5eb5568a1b1a5e0e7404ea4d320669e261fcc419b6f5be4f5457db2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-600818",
	                "Source": "/var/lib/docker/volumes/newest-cni-600818/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-600818",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-600818",
	                "name.minikube.sigs.k8s.io": "newest-cni-600818",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e0b67b1a35e26865a89e8cdb695fb4323de87b76dbfbb60133b1c6e26d51ffa",
	            "SandboxKey": "/var/run/docker/netns/3e0b67b1a35e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33824"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33825"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33826"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33827"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-600818": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:39:1d:39:15:5b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a3cd7ce9096f133c92aef6a7dc4fc2b918e8e85d34f96edb6bcf65eb55bcdc15",
	                    "EndpointID": "f209d82c2bcef9f334ff1559531b0dbce39b8eacb932e36075dc508f932575c1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-600818",
	                        "533b7ee97cf4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
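The host-side port mappings for the newest-cni-600818 container sit in the NetworkSettings.Ports block of the inspect output above. A minimal sketch for extracting just that mapping, assuming jq is available on the host (not something the test harness does):

	docker inspect newest-cni-600818 \
	  | jq '.[0].NetworkSettings.Ports | to_entries[] | {port: .key, host: .value[0].HostPort}'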
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-600818 -n newest-cni-600818
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-600818 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-600818 logs -n 25: (1.067985484s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-636406 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ delete  │ -p cert-expiration-636406                                                                                                                                                                                                                     │ cert-expiration-636406       │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:46 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:46 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-769461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-769461 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:47 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-769461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:47 UTC │
	│ start   │ -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:47 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable metrics-server -p embed-certs-404149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ stop    │ -p embed-certs-404149 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable dashboard -p embed-certs-404149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:49 UTC │
	│ image   │ default-k8s-diff-port-769461 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ pause   │ -p default-k8s-diff-port-769461 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p disable-driver-mounts-200933                                                                                                                                                                                                               │ disable-driver-mounts-200933 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ start   │ -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:50 UTC │
	│ image   │ embed-certs-404149 image list --format=json                                                                                                                                                                                                   │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ pause   │ -p embed-certs-404149 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │                     │
	│ delete  │ -p embed-certs-404149                                                                                                                                                                                                                         │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p embed-certs-404149                                                                                                                                                                                                                         │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ start   │ -p newest-cni-600818 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable metrics-server -p no-preload-126380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	│ stop    │ -p no-preload-126380 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-600818 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:49:46
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:49:46.801757  791960 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:49:46.802385  791960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:49:46.802421  791960 out.go:374] Setting ErrFile to fd 2...
	I1115 11:49:46.802443  791960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:49:46.802736  791960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:49:46.803205  791960 out.go:368] Setting JSON to false
	I1115 11:49:46.804203  791960 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12738,"bootTime":1763194649,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:49:46.804301  791960 start.go:143] virtualization:  
	I1115 11:49:46.809819  791960 out.go:179] * [newest-cni-600818] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:49:46.813435  791960 notify.go:221] Checking for updates...
	I1115 11:49:46.813402  791960 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:49:46.817895  791960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:49:46.821033  791960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:49:46.823902  791960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:49:46.825912  791960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:49:46.829314  791960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:49:46.832664  791960 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:49:46.832761  791960 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:49:46.872213  791960 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:49:46.872330  791960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:49:46.966913  791960 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:49:46.95674444 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:49:46.967022  791960 docker.go:319] overlay module found
	I1115 11:49:46.970300  791960 out.go:179] * Using the docker driver based on user configuration
	I1115 11:49:46.973178  791960 start.go:309] selected driver: docker
	I1115 11:49:46.973203  791960 start.go:930] validating driver "docker" against <nil>
	I1115 11:49:46.973223  791960 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:49:46.974019  791960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:49:47.065978  791960 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:49:47.055906364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:49:47.066133  791960 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1115 11:49:47.066157  791960 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1115 11:49:47.066374  791960 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 11:49:47.069870  791960 out.go:179] * Using Docker driver with root privileges
	I1115 11:49:47.073045  791960 cni.go:84] Creating CNI manager for ""
	I1115 11:49:47.073115  791960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:49:47.073128  791960 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 11:49:47.073221  791960 start.go:353] cluster config:
	{Name:newest-cni-600818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:49:47.076497  791960 out.go:179] * Starting "newest-cni-600818" primary control-plane node in "newest-cni-600818" cluster
	I1115 11:49:47.079384  791960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:49:47.082327  791960 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:49:47.085018  791960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:49:47.085043  791960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:49:47.085072  791960 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 11:49:47.085083  791960 cache.go:65] Caching tarball of preloaded images
	I1115 11:49:47.085160  791960 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:49:47.085169  791960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:49:47.085293  791960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/config.json ...
	I1115 11:49:47.085317  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/config.json: {Name:mk7de3b3a8d810d2120ca1d552d370332a21b889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:47.109969  791960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:49:47.109993  791960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:49:47.110007  791960 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:49:47.110034  791960 start.go:360] acquireMachinesLock for newest-cni-600818: {Name:mkadfb381b8085c410b4f5d50b3173a97fec4ebd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:49:47.110143  791960 start.go:364] duration metric: took 89.019µs to acquireMachinesLock for "newest-cni-600818"
	I1115 11:49:47.110167  791960 start.go:93] Provisioning new machine with config: &{Name:newest-cni-600818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:49:47.110243  791960 start.go:125] createHost starting for "" (driver="docker")
	I1115 11:49:48.120047  787845 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502704159s
	I1115 11:49:48.154135  787845 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 11:49:48.170821  787845 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 11:49:48.185288  787845 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 11:49:48.185499  787845 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-126380 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 11:49:48.208541  787845 kubeadm.go:319] [bootstrap-token] Using token: wrmliq.1xiul888wuvtqxks
	I1115 11:49:48.211733  787845 out.go:252]   - Configuring RBAC rules ...
	I1115 11:49:48.211859  787845 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 11:49:48.220083  787845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 11:49:48.233848  787845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 11:49:48.239669  787845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 11:49:48.245263  787845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 11:49:48.250200  787845 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 11:49:48.527960  787845 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 11:49:49.039736  787845 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 11:49:49.526983  787845 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 11:49:49.528427  787845 kubeadm.go:319] 
	I1115 11:49:49.528504  787845 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 11:49:49.528510  787845 kubeadm.go:319] 
	I1115 11:49:49.528591  787845 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 11:49:49.528600  787845 kubeadm.go:319] 
	I1115 11:49:49.528626  787845 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 11:49:49.529130  787845 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 11:49:49.529200  787845 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 11:49:49.529214  787845 kubeadm.go:319] 
	I1115 11:49:49.529272  787845 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 11:49:49.529276  787845 kubeadm.go:319] 
	I1115 11:49:49.529326  787845 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 11:49:49.529331  787845 kubeadm.go:319] 
	I1115 11:49:49.529385  787845 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 11:49:49.529463  787845 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 11:49:49.529535  787845 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 11:49:49.529539  787845 kubeadm.go:319] 
	I1115 11:49:49.529844  787845 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 11:49:49.529933  787845 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 11:49:49.529938  787845 kubeadm.go:319] 
	I1115 11:49:49.530232  787845 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wrmliq.1xiul888wuvtqxks \
	I1115 11:49:49.530347  787845 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a \
	I1115 11:49:49.530545  787845 kubeadm.go:319] 	--control-plane 
	I1115 11:49:49.530556  787845 kubeadm.go:319] 
	I1115 11:49:49.530837  787845 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 11:49:49.530847  787845 kubeadm.go:319] 
	I1115 11:49:49.531122  787845 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wrmliq.1xiul888wuvtqxks \
	I1115 11:49:49.531420  787845 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a 
	I1115 11:49:49.536492  787845 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 11:49:49.536744  787845 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 11:49:49.536914  787845 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 11:49:49.536943  787845 cni.go:84] Creating CNI manager for ""
	I1115 11:49:49.536951  787845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:49:49.541543  787845 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 11:49:49.544694  787845 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 11:49:49.549712  787845 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 11:49:49.549780  787845 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 11:49:49.570796  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 11:49:50.017006  787845 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 11:49:50.017166  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:50.017257  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-126380 minikube.k8s.io/updated_at=2025_11_15T11_49_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=no-preload-126380 minikube.k8s.io/primary=true
	I1115 11:49:50.393141  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:50.393201  787845 ops.go:34] apiserver oom_adj: -16
	I1115 11:49:50.894257  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:47.114529  791960 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 11:49:47.114823  791960 start.go:159] libmachine.API.Create for "newest-cni-600818" (driver="docker")
	I1115 11:49:47.114866  791960 client.go:173] LocalClient.Create starting
	I1115 11:49:47.114955  791960 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 11:49:47.114987  791960 main.go:143] libmachine: Decoding PEM data...
	I1115 11:49:47.115002  791960 main.go:143] libmachine: Parsing certificate...
	I1115 11:49:47.115053  791960 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 11:49:47.115071  791960 main.go:143] libmachine: Decoding PEM data...
	I1115 11:49:47.115085  791960 main.go:143] libmachine: Parsing certificate...
	I1115 11:49:47.115474  791960 cli_runner.go:164] Run: docker network inspect newest-cni-600818 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 11:49:47.138330  791960 cli_runner.go:211] docker network inspect newest-cni-600818 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 11:49:47.138413  791960 network_create.go:284] running [docker network inspect newest-cni-600818] to gather additional debugging logs...
	I1115 11:49:47.138431  791960 cli_runner.go:164] Run: docker network inspect newest-cni-600818
	W1115 11:49:47.161799  791960 cli_runner.go:211] docker network inspect newest-cni-600818 returned with exit code 1
	I1115 11:49:47.161836  791960 network_create.go:287] error running [docker network inspect newest-cni-600818]: docker network inspect newest-cni-600818: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-600818 not found
	I1115 11:49:47.161850  791960 network_create.go:289] output of [docker network inspect newest-cni-600818]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-600818 not found
	
	** /stderr **
	I1115 11:49:47.161962  791960 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:49:47.191336  791960 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-70b4341e5839 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:cf:e4:18:31:11} reservation:<nil>}
	I1115 11:49:47.191863  791960 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5353e0ad5224 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:f4:9a:df:ce:52} reservation:<nil>}
	I1115 11:49:47.192403  791960 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-cf2ab118f937 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:c9:22:19:21:27} reservation:<nil>}
	I1115 11:49:47.192985  791960 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400196d8a0}
	I1115 11:49:47.193010  791960 network_create.go:124] attempt to create docker network newest-cni-600818 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1115 11:49:47.193065  791960 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-600818 newest-cni-600818
	I1115 11:49:47.261629  791960 network_create.go:108] docker network newest-cni-600818 192.168.76.0/24 created
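
The subnet scan above skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because they are taken and settles on 192.168.76.0/24. The resulting bridge can be re-checked with the same docker CLI the driver uses, for example:

    # print the subnet and gateway Docker recorded for the new network
    docker network inspect newest-cni-600818 \
      --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'   # expected: 192.168.76.0/24 gw 192.168.76.1
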
	I1115 11:49:47.261658  791960 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-600818" container
	I1115 11:49:47.261729  791960 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 11:49:47.279970  791960 cli_runner.go:164] Run: docker volume create newest-cni-600818 --label name.minikube.sigs.k8s.io=newest-cni-600818 --label created_by.minikube.sigs.k8s.io=true
	I1115 11:49:47.300475  791960 oci.go:103] Successfully created a docker volume newest-cni-600818
	I1115 11:49:47.300553  791960 cli_runner.go:164] Run: docker run --rm --name newest-cni-600818-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-600818 --entrypoint /usr/bin/test -v newest-cni-600818:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 11:49:47.911996  791960 oci.go:107] Successfully prepared a docker volume newest-cni-600818
	I1115 11:49:47.912067  791960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:49:47.912077  791960 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 11:49:47.912140  791960 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-600818:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
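
The preload tarball is unpacked into the newest-cni-600818 volume so that CRI-O inside the node container starts with the v1.34.1 images already present. A hedged way to peek at the result, reusing the kicbase image referenced above (the /var/lib/containers/storage path is an assumption about CRI-O's storage layout, not something this log prints):

    docker run --rm --entrypoint /bin/ls \
      -v newest-cni-600818:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 \
      /var/lib/containers/storage
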
	I1115 11:49:51.394101  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:51.893498  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:52.394087  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:52.893253  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:53.393659  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:53.893567  787845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:49:54.288687  787845 kubeadm.go:1114] duration metric: took 4.271573473s to wait for elevateKubeSystemPrivileges
	I1115 11:49:54.288715  787845 kubeadm.go:403] duration metric: took 25.131922265s to StartCluster
	I1115 11:49:54.288733  787845 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:54.288793  787845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:49:54.289531  787845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:49:54.289751  787845 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:49:54.289834  787845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 11:49:54.290054  787845 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:49:54.290085  787845 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:49:54.290141  787845 addons.go:70] Setting storage-provisioner=true in profile "no-preload-126380"
	I1115 11:49:54.290155  787845 addons.go:239] Setting addon storage-provisioner=true in "no-preload-126380"
	I1115 11:49:54.290176  787845 host.go:66] Checking if "no-preload-126380" exists ...
	I1115 11:49:54.290649  787845 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:49:54.291036  787845 addons.go:70] Setting default-storageclass=true in profile "no-preload-126380"
	I1115 11:49:54.291057  787845 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-126380"
	I1115 11:49:54.291315  787845 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:49:54.293097  787845 out.go:179] * Verifying Kubernetes components...
	I1115 11:49:54.299816  787845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:49:54.326966  787845 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:49:54.330992  787845 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:49:54.331015  787845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:49:54.331084  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:54.337815  787845 addons.go:239] Setting addon default-storageclass=true in "no-preload-126380"
	I1115 11:49:54.337856  787845 host.go:66] Checking if "no-preload-126380" exists ...
	I1115 11:49:54.338262  787845 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:49:54.366469  787845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:49:54.377154  787845 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:49:54.377174  787845 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:49:54.377247  787845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:49:54.405387  787845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:49:54.570701  787845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 11:49:54.612680  787845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:49:54.627529  787845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:49:54.647554  787845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:49:54.960730  787845 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1115 11:49:54.962538  787845 node_ready.go:35] waiting up to 6m0s for node "no-preload-126380" to be "Ready" ...
	I1115 11:49:55.417759  787845 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1115 11:49:55.420763  787845 addons.go:515] duration metric: took 1.130654871s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1115 11:49:55.471490  787845 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-126380" context rescaled to 1 replicas
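
minikube rescales CoreDNS from kubeadm's default of two replicas down to one for this single-node profile. The equivalent manual operation would be roughly:

    # illustrative only; the test performs this through the Kubernetes API rather than kubectl
    kubectl -n kube-system scale deployment coredns --replicas=1
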
	I1115 11:49:52.809709  791960 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-600818:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.897535012s)
	I1115 11:49:52.809739  791960 kic.go:203] duration metric: took 4.897658376s to extract preloaded images to volume ...
	W1115 11:49:52.809902  791960 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 11:49:52.810004  791960 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 11:49:52.903070  791960 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-600818 --name newest-cni-600818 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-600818 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-600818 --network newest-cni-600818 --ip 192.168.76.2 --volume newest-cni-600818:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 11:49:53.267451  791960 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Running}}
	I1115 11:49:53.295283  791960 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:49:53.325939  791960 cli_runner.go:164] Run: docker exec newest-cni-600818 stat /var/lib/dpkg/alternatives/iptables
	I1115 11:49:53.382416  791960 oci.go:144] the created container "newest-cni-600818" has a running status.
	I1115 11:49:53.382456  791960 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa...
	I1115 11:49:54.016354  791960 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 11:49:54.042890  791960 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:49:54.082878  791960 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 11:49:54.082901  791960 kic_runner.go:114] Args: [docker exec --privileged newest-cni-600818 chown docker:docker /home/docker/.ssh/authorized_keys]
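
With the public key installed and owned by the docker user, the node container accepts SSH on whichever host port Docker published for 22/tcp (33824 later in this log). A rough manual equivalent of the SSH access the provisioner uses next:

    # discover the published SSH port and connect with the generated key (illustrative)
    PORT=$(docker port newest-cni-600818 22/tcp | head -n1 | cut -d: -f2)
    ssh -i /home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa \
        -p "$PORT" docker@127.0.0.1
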
	I1115 11:49:54.162255  791960 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:49:54.189282  791960 machine.go:94] provisionDockerMachine start ...
	I1115 11:49:54.189374  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:54.214336  791960 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:54.214666  791960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 11:49:54.214681  791960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:49:54.215215  791960 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54842->127.0.0.1:33824: read: connection reset by peer
	W1115 11:49:56.966205  787845 node_ready.go:57] node "no-preload-126380" has "Ready":"False" status (will retry)
	W1115 11:49:59.466721  787845 node_ready.go:57] node "no-preload-126380" has "Ready":"False" status (will retry)
	I1115 11:49:57.380583  791960 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-600818
	
	I1115 11:49:57.380664  791960 ubuntu.go:182] provisioning hostname "newest-cni-600818"
	I1115 11:49:57.380770  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:57.407442  791960 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:57.407761  791960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 11:49:57.407778  791960 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-600818 && echo "newest-cni-600818" | sudo tee /etc/hostname
	I1115 11:49:57.579186  791960 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-600818
	
	I1115 11:49:57.579291  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:57.602553  791960 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:57.602873  791960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 11:49:57.602896  791960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-600818' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-600818/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-600818' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:49:57.761609  791960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:49:57.761661  791960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:49:57.761685  791960 ubuntu.go:190] setting up certificates
	I1115 11:49:57.761696  791960 provision.go:84] configureAuth start
	I1115 11:49:57.761773  791960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-600818
	I1115 11:49:57.792503  791960 provision.go:143] copyHostCerts
	I1115 11:49:57.792580  791960 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:49:57.792595  791960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:49:57.792668  791960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:49:57.792762  791960 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:49:57.792772  791960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:49:57.792799  791960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:49:57.792852  791960 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:49:57.792965  791960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:49:57.793001  791960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:49:57.793076  791960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.newest-cni-600818 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-600818]
	I1115 11:49:58.427067  791960 provision.go:177] copyRemoteCerts
	I1115 11:49:58.427198  791960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:49:58.427273  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:58.446723  791960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:49:58.553341  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 11:49:58.571796  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 11:49:58.597903  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:49:58.618340  791960 provision.go:87] duration metric: took 856.616016ms to configureAuth
	I1115 11:49:58.618407  791960 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:49:58.618617  791960 config.go:182] Loaded profile config "newest-cni-600818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:49:58.618770  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:58.637559  791960 main.go:143] libmachine: Using SSH client type: native
	I1115 11:49:58.637953  791960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 11:49:58.637973  791960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:49:58.934751  791960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:49:58.934817  791960 machine.go:97] duration metric: took 4.745512604s to provisionDockerMachine
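
Provisioning finishes by writing the insecure-registry flag to /etc/sysconfig/crio.minikube and restarting CRI-O. A quick way to confirm the override took effect, run inside the node (illustrative):

    cat /etc/sysconfig/crio.minikube && systemctl is-active crio
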
	I1115 11:49:58.934840  791960 client.go:176] duration metric: took 11.819967418s to LocalClient.Create
	I1115 11:49:58.934874  791960 start.go:167] duration metric: took 11.820053294s to libmachine.API.Create "newest-cni-600818"
	I1115 11:49:58.934895  791960 start.go:293] postStartSetup for "newest-cni-600818" (driver="docker")
	I1115 11:49:58.934919  791960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:49:58.935000  791960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:49:58.935077  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:58.954566  791960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:49:59.065936  791960 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:49:59.069773  791960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:49:59.069818  791960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:49:59.069830  791960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:49:59.069893  791960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:49:59.069993  791960 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:49:59.070099  791960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:49:59.078650  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:49:59.096177  791960 start.go:296] duration metric: took 161.254344ms for postStartSetup
	I1115 11:49:59.096550  791960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-600818
	I1115 11:49:59.114717  791960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/config.json ...
	I1115 11:49:59.115018  791960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:49:59.115070  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:59.135596  791960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:49:59.244729  791960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:49:59.249644  791960 start.go:128] duration metric: took 12.139386047s to createHost
	I1115 11:49:59.249666  791960 start.go:83] releasing machines lock for "newest-cni-600818", held for 12.139514795s
	I1115 11:49:59.249736  791960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-600818
	I1115 11:49:59.273490  791960 ssh_runner.go:195] Run: cat /version.json
	I1115 11:49:59.273551  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:59.273778  791960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:49:59.273841  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:49:59.300748  791960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:49:59.325339  791960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:49:59.408538  791960 ssh_runner.go:195] Run: systemctl --version
	I1115 11:49:59.509635  791960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:49:59.547425  791960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:49:59.552680  791960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:49:59.552757  791960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:49:59.582761  791960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 11:49:59.582797  791960 start.go:496] detecting cgroup driver to use...
	I1115 11:49:59.582830  791960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:49:59.582883  791960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:49:59.602735  791960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:49:59.616030  791960 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:49:59.616150  791960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:49:59.634324  791960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:49:59.652385  791960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:49:59.788158  791960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:49:59.915687  791960 docker.go:234] disabling docker service ...
	I1115 11:49:59.915767  791960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:49:59.937309  791960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:49:59.950901  791960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:50:00.220648  791960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:50:00.527980  791960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:50:00.550711  791960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:50:00.582026  791960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:50:00.582104  791960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:00.594412  791960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:50:00.594542  791960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:00.606808  791960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:00.617986  791960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:00.631266  791960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:50:00.642683  791960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:00.654360  791960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:00.672604  791960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:00.683717  791960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:50:00.693618  791960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:50:00.703953  791960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:00.840808  791960 ssh_runner.go:195] Run: sudo systemctl restart crio
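
The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image registry.k8s.io/pause:3.10.1, cgroup_manager "cgroupfs", conmon_cgroup "pod", and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls. A spot-check after the restart (illustrative):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
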
	I1115 11:50:01.129015  791960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:50:01.129107  791960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:50:01.133399  791960 start.go:564] Will wait 60s for crictl version
	I1115 11:50:01.133465  791960 ssh_runner.go:195] Run: which crictl
	I1115 11:50:01.137593  791960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:50:01.171478  791960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
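
These crictl calls need no --runtime-endpoint flag because the /etc/crictl.yaml written above already points the client at CRI-O's socket; spelled out explicitly, the same query would be:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
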
	I1115 11:50:01.171616  791960 ssh_runner.go:195] Run: crio --version
	I1115 11:50:01.203073  791960 ssh_runner.go:195] Run: crio --version
	I1115 11:50:01.242602  791960 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:50:01.245507  791960 cli_runner.go:164] Run: docker network inspect newest-cni-600818 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:50:01.266013  791960 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 11:50:01.271310  791960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:50:01.286397  791960 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 11:50:01.289354  791960 kubeadm.go:884] updating cluster {Name:newest-cni-600818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:50:01.289562  791960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:50:01.289668  791960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:50:01.325933  791960 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:50:01.325957  791960 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:50:01.326050  791960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:50:01.356342  791960 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:50:01.356366  791960 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:50:01.356375  791960 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 11:50:01.356517  791960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-600818 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:50:01.356630  791960 ssh_runner.go:195] Run: crio config
	I1115 11:50:01.428703  791960 cni.go:84] Creating CNI manager for ""
	I1115 11:50:01.428731  791960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:50:01.428748  791960 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 11:50:01.428793  791960 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-600818 NodeName:newest-cni-600818 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:50:01.429079  791960 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-600818"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
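
The config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. When debugging these runs it can help to diff it against upstream defaults; a hedged sketch using only the bundled binaries mentioned in this log:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config print init-defaults \
      --component-configs KubeletConfiguration,KubeProxyConfiguration
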
	
	I1115 11:50:01.429180  791960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:50:01.438227  791960 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:50:01.438303  791960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:50:01.452688  791960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 11:50:01.468378  791960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:50:01.482279  791960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1115 11:50:01.496724  791960 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:50:01.500792  791960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:50:01.512054  791960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:01.630116  791960 ssh_runner.go:195] Run: sudo systemctl start kubelet
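
The 10-kubeadm.conf drop-in written just above overrides ExecStart with the node-specific flags (--node-ip, --hostname-override, the cgroups settings); systemd's own tooling shows the merged unit (illustrative, run inside the node):

    systemctl cat kubelet        # base unit plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet
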
	I1115 11:50:01.647239  791960 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818 for IP: 192.168.76.2
	I1115 11:50:01.647260  791960 certs.go:195] generating shared ca certs ...
	I1115 11:50:01.647277  791960 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:01.647425  791960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:50:01.647476  791960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:50:01.647487  791960 certs.go:257] generating profile certs ...
	I1115 11:50:01.647555  791960 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/client.key
	I1115 11:50:01.647574  791960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/client.crt with IP's: []
	I1115 11:50:01.919017  791960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/client.crt ...
	I1115 11:50:01.919050  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/client.crt: {Name:mk84da1a564d90a292e833d8d7f924ee29584c8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:01.919328  791960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/client.key ...
	I1115 11:50:01.919343  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/client.key: {Name:mk5a8aee48f197cf42f8f8a6d14ba2e1baa11bc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:01.919439  791960 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key.a60e7b42
	I1115 11:50:01.919455  791960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.crt.a60e7b42 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1115 11:50:02.940383  791960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.crt.a60e7b42 ...
	I1115 11:50:02.940414  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.crt.a60e7b42: {Name:mk09ada8da635957818d702e0257f698e34f4b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:02.940599  791960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key.a60e7b42 ...
	I1115 11:50:02.940613  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key.a60e7b42: {Name:mk3be3027fd498f5c96a9fe43585aeb99ea2dc6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:02.940705  791960 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.crt.a60e7b42 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.crt
	I1115 11:50:02.940791  791960 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key.a60e7b42 -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key
	I1115 11:50:02.940875  791960 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.key
	I1115 11:50:02.940893  791960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.crt with IP's: []
	I1115 11:50:03.747651  791960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.crt ...
	I1115 11:50:03.747685  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.crt: {Name:mk91a17b2444f6eb3a03908e0dd6639d785e5cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:03.747905  791960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.key ...
	I1115 11:50:03.747921  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.key: {Name:mk39c1a57ff08354049bdb83c2052ef06cf4d0f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:03.748110  791960 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:50:03.748159  791960 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:50:03.748171  791960 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:50:03.748197  791960 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:50:03.748232  791960 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:50:03.748257  791960 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:50:03.748304  791960 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:50:03.748896  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:50:03.767572  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:50:03.786531  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:50:03.807360  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:50:03.828709  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 11:50:03.848181  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 11:50:03.867570  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:50:03.884673  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 11:50:03.903301  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:50:03.924346  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:50:03.945059  791960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:50:03.970961  791960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:50:03.987349  791960 ssh_runner.go:195] Run: openssl version
	I1115 11:50:03.994201  791960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:50:04.005680  791960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:50:04.011050  791960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:50:04.011169  791960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:50:04.053168  791960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:50:04.061954  791960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:50:04.070605  791960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:50:04.074630  791960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:50:04.074707  791960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:50:04.117878  791960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:50:04.126371  791960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:50:04.135167  791960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:04.139401  791960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:04.139475  791960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:04.180969  791960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
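
The three "ln -fs ... /etc/ssl/certs/<hash>.0" steps reproduce by hand what openssl rehash (c_rehash) does for a whole directory: the link name is the subject hash of the certificate. For the minikube CA, for example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0
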
	I1115 11:50:04.189784  791960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:50:04.193276  791960 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 11:50:04.193328  791960 kubeadm.go:401] StartCluster: {Name:newest-cni-600818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:50:04.193419  791960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:50:04.193473  791960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:50:04.228953  791960 cri.go:89] found id: ""
	I1115 11:50:04.229034  791960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:50:04.243274  791960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 11:50:04.251097  791960 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 11:50:04.251212  791960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 11:50:04.261797  791960 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 11:50:04.261816  791960 kubeadm.go:158] found existing configuration files:
	
	I1115 11:50:04.261894  791960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 11:50:04.271250  791960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 11:50:04.271333  791960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 11:50:04.279073  791960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 11:50:04.287572  791960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 11:50:04.287637  791960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 11:50:04.295445  791960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 11:50:04.303281  791960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 11:50:04.303382  791960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 11:50:04.310828  791960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 11:50:04.320161  791960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 11:50:04.320290  791960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 11:50:04.328408  791960 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 11:50:04.374656  791960 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 11:50:04.375000  791960 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 11:50:04.398165  791960 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 11:50:04.398245  791960 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 11:50:04.398286  791960 kubeadm.go:319] OS: Linux
	I1115 11:50:04.398339  791960 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 11:50:04.398399  791960 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 11:50:04.398454  791960 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 11:50:04.398508  791960 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 11:50:04.398563  791960 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 11:50:04.398615  791960 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 11:50:04.398667  791960 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 11:50:04.398721  791960 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 11:50:04.398774  791960 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 11:50:04.473553  791960 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 11:50:04.473672  791960 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 11:50:04.473772  791960 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 11:50:04.481196  791960 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1115 11:50:01.968059  787845 node_ready.go:57] node "no-preload-126380" has "Ready":"False" status (will retry)
	W1115 11:50:04.466262  787845 node_ready.go:57] node "no-preload-126380" has "Ready":"False" status (will retry)
	I1115 11:50:04.487266  791960 out.go:252]   - Generating certificates and keys ...
	I1115 11:50:04.487412  791960 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 11:50:04.487498  791960 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 11:50:05.734767  791960 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 11:50:06.618928  791960 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1115 11:50:06.966188  787845 node_ready.go:57] node "no-preload-126380" has "Ready":"False" status (will retry)
	I1115 11:50:09.466742  787845 node_ready.go:49] node "no-preload-126380" is "Ready"
	I1115 11:50:09.466766  787845 node_ready.go:38] duration metric: took 14.504197769s for node "no-preload-126380" to be "Ready" ...
	I1115 11:50:09.466779  787845 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:50:09.466840  787845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:50:09.481539  787845 api_server.go:72] duration metric: took 15.191757012s to wait for apiserver process to appear ...
	I1115 11:50:09.481560  787845 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:50:09.481580  787845 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 11:50:09.490856  787845 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 11:50:09.492061  787845 api_server.go:141] control plane version: v1.34.1
	I1115 11:50:09.492083  787845 api_server.go:131] duration metric: took 10.515737ms to wait for apiserver health ...
	I1115 11:50:09.492091  787845 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:50:09.506521  787845 system_pods.go:59] 8 kube-system pods found
	I1115 11:50:09.506555  787845 system_pods.go:61] "coredns-66bc5c9577-m2hwn" [ff6e9c80-26d2-46ef-8778-38324bb83386] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:50:09.506562  787845 system_pods.go:61] "etcd-no-preload-126380" [5f26f710-de87-44df-9c7f-11ba016586d5] Running
	I1115 11:50:09.506568  787845 system_pods.go:61] "kindnet-7vrr2" [c5da489a-d25e-49a1-95b9-c868981a97e8] Running
	I1115 11:50:09.506572  787845 system_pods.go:61] "kube-apiserver-no-preload-126380" [27004b84-770c-487b-8fe0-926dd013d264] Running
	I1115 11:50:09.506577  787845 system_pods.go:61] "kube-controller-manager-no-preload-126380" [05c30b3e-e44d-4daa-afed-99f025d187b8] Running
	I1115 11:50:09.506581  787845 system_pods.go:61] "kube-proxy-zhsz4" [64878ec8-f351-4aa1-b2a9-7a6b5c705fcd] Running
	I1115 11:50:09.506585  787845 system_pods.go:61] "kube-scheduler-no-preload-126380" [f2b8c98f-6984-434e-a28c-747929bb80ae] Running
	I1115 11:50:09.506593  787845 system_pods.go:61] "storage-provisioner" [31e6610d-bf36-4446-8c1f-c0d4cd2563e6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:50:09.506601  787845 system_pods.go:74] duration metric: took 14.502767ms to wait for pod list to return data ...
	I1115 11:50:09.506609  787845 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:50:09.514282  787845 default_sa.go:45] found service account: "default"
	I1115 11:50:09.514306  787845 default_sa.go:55] duration metric: took 7.690929ms for default service account to be created ...
	I1115 11:50:09.514316  787845 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 11:50:09.522872  787845 system_pods.go:86] 8 kube-system pods found
	I1115 11:50:09.522957  787845 system_pods.go:89] "coredns-66bc5c9577-m2hwn" [ff6e9c80-26d2-46ef-8778-38324bb83386] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:50:09.522981  787845 system_pods.go:89] "etcd-no-preload-126380" [5f26f710-de87-44df-9c7f-11ba016586d5] Running
	I1115 11:50:09.523000  787845 system_pods.go:89] "kindnet-7vrr2" [c5da489a-d25e-49a1-95b9-c868981a97e8] Running
	I1115 11:50:09.523031  787845 system_pods.go:89] "kube-apiserver-no-preload-126380" [27004b84-770c-487b-8fe0-926dd013d264] Running
	I1115 11:50:09.523055  787845 system_pods.go:89] "kube-controller-manager-no-preload-126380" [05c30b3e-e44d-4daa-afed-99f025d187b8] Running
	I1115 11:50:09.523074  787845 system_pods.go:89] "kube-proxy-zhsz4" [64878ec8-f351-4aa1-b2a9-7a6b5c705fcd] Running
	I1115 11:50:09.523093  787845 system_pods.go:89] "kube-scheduler-no-preload-126380" [f2b8c98f-6984-434e-a28c-747929bb80ae] Running
	I1115 11:50:09.523129  787845 system_pods.go:89] "storage-provisioner" [31e6610d-bf36-4446-8c1f-c0d4cd2563e6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:50:09.523164  787845 retry.go:31] will retry after 297.961286ms: missing components: kube-dns
	I1115 11:50:09.825020  787845 system_pods.go:86] 8 kube-system pods found
	I1115 11:50:09.825072  787845 system_pods.go:89] "coredns-66bc5c9577-m2hwn" [ff6e9c80-26d2-46ef-8778-38324bb83386] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:50:09.825081  787845 system_pods.go:89] "etcd-no-preload-126380" [5f26f710-de87-44df-9c7f-11ba016586d5] Running
	I1115 11:50:09.825090  787845 system_pods.go:89] "kindnet-7vrr2" [c5da489a-d25e-49a1-95b9-c868981a97e8] Running
	I1115 11:50:09.825095  787845 system_pods.go:89] "kube-apiserver-no-preload-126380" [27004b84-770c-487b-8fe0-926dd013d264] Running
	I1115 11:50:09.825104  787845 system_pods.go:89] "kube-controller-manager-no-preload-126380" [05c30b3e-e44d-4daa-afed-99f025d187b8] Running
	I1115 11:50:09.825108  787845 system_pods.go:89] "kube-proxy-zhsz4" [64878ec8-f351-4aa1-b2a9-7a6b5c705fcd] Running
	I1115 11:50:09.825111  787845 system_pods.go:89] "kube-scheduler-no-preload-126380" [f2b8c98f-6984-434e-a28c-747929bb80ae] Running
	I1115 11:50:09.825119  787845 system_pods.go:89] "storage-provisioner" [31e6610d-bf36-4446-8c1f-c0d4cd2563e6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:50:09.825134  787845 retry.go:31] will retry after 280.658865ms: missing components: kube-dns
	I1115 11:50:10.110813  787845 system_pods.go:86] 8 kube-system pods found
	I1115 11:50:10.110846  787845 system_pods.go:89] "coredns-66bc5c9577-m2hwn" [ff6e9c80-26d2-46ef-8778-38324bb83386] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 11:50:10.110853  787845 system_pods.go:89] "etcd-no-preload-126380" [5f26f710-de87-44df-9c7f-11ba016586d5] Running
	I1115 11:50:10.110859  787845 system_pods.go:89] "kindnet-7vrr2" [c5da489a-d25e-49a1-95b9-c868981a97e8] Running
	I1115 11:50:10.110864  787845 system_pods.go:89] "kube-apiserver-no-preload-126380" [27004b84-770c-487b-8fe0-926dd013d264] Running
	I1115 11:50:10.110869  787845 system_pods.go:89] "kube-controller-manager-no-preload-126380" [05c30b3e-e44d-4daa-afed-99f025d187b8] Running
	I1115 11:50:10.110874  787845 system_pods.go:89] "kube-proxy-zhsz4" [64878ec8-f351-4aa1-b2a9-7a6b5c705fcd] Running
	I1115 11:50:10.110879  787845 system_pods.go:89] "kube-scheduler-no-preload-126380" [f2b8c98f-6984-434e-a28c-747929bb80ae] Running
	I1115 11:50:10.110884  787845 system_pods.go:89] "storage-provisioner" [31e6610d-bf36-4446-8c1f-c0d4cd2563e6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 11:50:10.110899  787845 retry.go:31] will retry after 382.962418ms: missing components: kube-dns
	I1115 11:50:10.498594  787845 system_pods.go:86] 8 kube-system pods found
	I1115 11:50:10.498621  787845 system_pods.go:89] "coredns-66bc5c9577-m2hwn" [ff6e9c80-26d2-46ef-8778-38324bb83386] Running
	I1115 11:50:10.498628  787845 system_pods.go:89] "etcd-no-preload-126380" [5f26f710-de87-44df-9c7f-11ba016586d5] Running
	I1115 11:50:10.498632  787845 system_pods.go:89] "kindnet-7vrr2" [c5da489a-d25e-49a1-95b9-c868981a97e8] Running
	I1115 11:50:10.498636  787845 system_pods.go:89] "kube-apiserver-no-preload-126380" [27004b84-770c-487b-8fe0-926dd013d264] Running
	I1115 11:50:10.498641  787845 system_pods.go:89] "kube-controller-manager-no-preload-126380" [05c30b3e-e44d-4daa-afed-99f025d187b8] Running
	I1115 11:50:10.498644  787845 system_pods.go:89] "kube-proxy-zhsz4" [64878ec8-f351-4aa1-b2a9-7a6b5c705fcd] Running
	I1115 11:50:10.498648  787845 system_pods.go:89] "kube-scheduler-no-preload-126380" [f2b8c98f-6984-434e-a28c-747929bb80ae] Running
	I1115 11:50:10.498652  787845 system_pods.go:89] "storage-provisioner" [31e6610d-bf36-4446-8c1f-c0d4cd2563e6] Running
	I1115 11:50:10.498659  787845 system_pods.go:126] duration metric: took 984.336962ms to wait for k8s-apps to be running ...
	I1115 11:50:10.498666  787845 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 11:50:10.498723  787845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:50:10.515426  787845 system_svc.go:56] duration metric: took 16.748935ms WaitForService to wait for kubelet
	I1115 11:50:10.515503  787845 kubeadm.go:587] duration metric: took 16.225726318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:50:10.515539  787845 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:50:10.518944  787845 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:50:10.519019  787845 node_conditions.go:123] node cpu capacity is 2
	I1115 11:50:10.519046  787845 node_conditions.go:105] duration metric: took 3.485149ms to run NodePressure ...
	I1115 11:50:10.519070  787845 start.go:242] waiting for startup goroutines ...
	I1115 11:50:10.519104  787845 start.go:247] waiting for cluster config update ...
	I1115 11:50:10.519131  787845 start.go:256] writing updated cluster config ...
	I1115 11:50:10.519485  787845 ssh_runner.go:195] Run: rm -f paused
	I1115 11:50:10.523693  787845 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:50:10.527731  787845 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m2hwn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:10.534046  787845 pod_ready.go:94] pod "coredns-66bc5c9577-m2hwn" is "Ready"
	I1115 11:50:10.534073  787845 pod_ready.go:86] duration metric: took 6.314914ms for pod "coredns-66bc5c9577-m2hwn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:10.537041  787845 pod_ready.go:83] waiting for pod "etcd-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:10.543341  787845 pod_ready.go:94] pod "etcd-no-preload-126380" is "Ready"
	I1115 11:50:10.543367  787845 pod_ready.go:86] duration metric: took 6.297273ms for pod "etcd-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:10.546430  787845 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:10.552249  787845 pod_ready.go:94] pod "kube-apiserver-no-preload-126380" is "Ready"
	I1115 11:50:10.552285  787845 pod_ready.go:86] duration metric: took 5.829272ms for pod "kube-apiserver-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:10.555177  787845 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:10.929099  787845 pod_ready.go:94] pod "kube-controller-manager-no-preload-126380" is "Ready"
	I1115 11:50:10.929132  787845 pod_ready.go:86] duration metric: took 373.931216ms for pod "kube-controller-manager-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:11.129453  787845 pod_ready.go:83] waiting for pod "kube-proxy-zhsz4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:06.843515  791960 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 11:50:07.600828  791960 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 11:50:07.922735  791960 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 11:50:07.923119  791960 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-600818] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 11:50:08.022142  791960 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 11:50:08.022609  791960 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-600818] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 11:50:09.209743  791960 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 11:50:09.439496  791960 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 11:50:10.113383  791960 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 11:50:10.114035  791960 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 11:50:10.359996  791960 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 11:50:10.683138  791960 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 11:50:11.635954  791960 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 11:50:11.800756  791960 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 11:50:11.528158  787845 pod_ready.go:94] pod "kube-proxy-zhsz4" is "Ready"
	I1115 11:50:11.528202  787845 pod_ready.go:86] duration metric: took 398.719054ms for pod "kube-proxy-zhsz4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:11.729211  787845 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:12.129488  787845 pod_ready.go:94] pod "kube-scheduler-no-preload-126380" is "Ready"
	I1115 11:50:12.129539  787845 pod_ready.go:86] duration metric: took 400.243484ms for pod "kube-scheduler-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:50:12.129584  787845 pod_ready.go:40] duration metric: took 1.605828488s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:50:12.223277  787845 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:50:12.226822  787845 out.go:179] * Done! kubectl is now configured to use "no-preload-126380" cluster and "default" namespace by default
	I1115 11:50:12.229001  791960 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 11:50:12.265543  791960 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 11:50:12.265653  791960 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 11:50:12.272029  791960 out.go:252]   - Booting up control plane ...
	I1115 11:50:12.272194  791960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 11:50:12.272304  791960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 11:50:12.272436  791960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 11:50:12.329703  791960 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 11:50:12.329863  791960 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 11:50:12.342188  791960 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 11:50:12.342341  791960 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 11:50:12.342390  791960 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 11:50:12.542068  791960 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 11:50:12.542201  791960 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 11:50:15.050037  791960 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.50310232s
	I1115 11:50:15.050154  791960 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 11:50:15.050240  791960 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1115 11:50:15.050333  791960 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 11:50:15.050414  791960 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 11:50:16.997773  791960 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.948594519s
	I1115 11:50:19.282392  791960 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.23374255s
	I1115 11:50:21.551916  791960 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503005909s
	I1115 11:50:21.582945  791960 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 11:50:21.600758  791960 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 11:50:21.617263  791960 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 11:50:21.617501  791960 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-600818 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 11:50:21.634848  791960 kubeadm.go:319] [bootstrap-token] Using token: bqu6v3.1anpaql43qcy8ikq
	I1115 11:50:21.637779  791960 out.go:252]   - Configuring RBAC rules ...
	I1115 11:50:21.637906  791960 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 11:50:21.649635  791960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 11:50:21.659486  791960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 11:50:21.670659  791960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 11:50:21.683554  791960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 11:50:21.690839  791960 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 11:50:21.959747  791960 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 11:50:22.416265  791960 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 11:50:22.959480  791960 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 11:50:22.961257  791960 kubeadm.go:319] 
	I1115 11:50:22.961340  791960 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 11:50:22.961351  791960 kubeadm.go:319] 
	I1115 11:50:22.961432  791960 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 11:50:22.961442  791960 kubeadm.go:319] 
	I1115 11:50:22.961468  791960 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 11:50:22.961534  791960 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 11:50:22.961590  791960 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 11:50:22.961599  791960 kubeadm.go:319] 
	I1115 11:50:22.961656  791960 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 11:50:22.961664  791960 kubeadm.go:319] 
	I1115 11:50:22.961715  791960 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 11:50:22.961724  791960 kubeadm.go:319] 
	I1115 11:50:22.961778  791960 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 11:50:22.961861  791960 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 11:50:22.961937  791960 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 11:50:22.961945  791960 kubeadm.go:319] 
	I1115 11:50:22.962034  791960 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 11:50:22.962118  791960 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 11:50:22.962127  791960 kubeadm.go:319] 
	I1115 11:50:22.962216  791960 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bqu6v3.1anpaql43qcy8ikq \
	I1115 11:50:22.962328  791960 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a \
	I1115 11:50:22.962354  791960 kubeadm.go:319] 	--control-plane 
	I1115 11:50:22.962363  791960 kubeadm.go:319] 
	I1115 11:50:22.962452  791960 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 11:50:22.962461  791960 kubeadm.go:319] 
	I1115 11:50:22.962554  791960 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bqu6v3.1anpaql43qcy8ikq \
	I1115 11:50:22.962665  791960 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a 
	I1115 11:50:22.968820  791960 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 11:50:22.969297  791960 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 11:50:22.969454  791960 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 11:50:22.969470  791960 cni.go:84] Creating CNI manager for ""
	I1115 11:50:22.969479  791960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:50:22.972571  791960 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 11:50:22.975501  791960 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 11:50:22.980185  791960 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 11:50:22.980205  791960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 11:50:23.000588  791960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 11:50:23.551694  791960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 11:50:23.551868  791960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:50:23.551974  791960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-600818 minikube.k8s.io/updated_at=2025_11_15T11_50_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=newest-cni-600818 minikube.k8s.io/primary=true
	I1115 11:50:23.882395  791960 ops.go:34] apiserver oom_adj: -16
	I1115 11:50:23.882497  791960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:50:24.382558  791960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:50:24.882643  791960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:50:25.382926  791960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:50:25.882623  791960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:50:26.382822  791960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:50:26.480003  791960 kubeadm.go:1114] duration metric: took 2.928168595s to wait for elevateKubeSystemPrivileges
	I1115 11:50:26.480034  791960 kubeadm.go:403] duration metric: took 22.286709589s to StartCluster
	I1115 11:50:26.480052  791960 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:26.480112  791960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:50:26.481048  791960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:26.481350  791960 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:50:26.481485  791960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 11:50:26.481783  791960 config.go:182] Loaded profile config "newest-cni-600818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:50:26.481824  791960 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:50:26.481887  791960 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-600818"
	I1115 11:50:26.481909  791960 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-600818"
	I1115 11:50:26.481931  791960 host.go:66] Checking if "newest-cni-600818" exists ...
	I1115 11:50:26.482423  791960 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:50:26.482693  791960 addons.go:70] Setting default-storageclass=true in profile "newest-cni-600818"
	I1115 11:50:26.482709  791960 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-600818"
	I1115 11:50:26.482955  791960 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:50:26.485614  791960 out.go:179] * Verifying Kubernetes components...
	I1115 11:50:26.490500  791960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:26.525004  791960 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:50:26.525094  791960 addons.go:239] Setting addon default-storageclass=true in "newest-cni-600818"
	I1115 11:50:26.525130  791960 host.go:66] Checking if "newest-cni-600818" exists ...
	I1115 11:50:26.525606  791960 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:50:26.527915  791960 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:50:26.527940  791960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:50:26.528001  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:26.569023  791960 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:50:26.569046  791960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:50:26.569108  791960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:26.571357  791960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:26.602564  791960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:26.695331  791960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 11:50:26.792261  791960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:50:26.844571  791960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:50:26.862807  791960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:50:27.276520  791960 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1115 11:50:27.277972  791960 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:50:27.278034  791960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:50:27.715254  791960 api_server.go:72] duration metric: took 1.23387305s to wait for apiserver process to appear ...
	I1115 11:50:27.715322  791960 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:50:27.715355  791960 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:50:27.742780  791960 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 11:50:27.748606  791960 api_server.go:141] control plane version: v1.34.1
	I1115 11:50:27.748689  791960 api_server.go:131] duration metric: took 33.34467ms to wait for apiserver health ...
	I1115 11:50:27.748733  791960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:50:27.754957  791960 system_pods.go:59] 8 kube-system pods found
	I1115 11:50:27.755057  791960 system_pods.go:61] "coredns-66bc5c9577-k2pmf" [6eb5cbde-f6a1-4680-ac07-4a2b6e15d42f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 11:50:27.755107  791960 system_pods.go:61] "etcd-newest-cni-600818" [32466f92-ecfd-446f-bfe9-68cf519b2b89] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:50:27.755134  791960 system_pods.go:61] "kindnet-bcvw7" [75bd6a1d-29ff-4420-982f-97b36c4b5830] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 11:50:27.755155  791960 system_pods.go:61] "kube-apiserver-newest-cni-600818" [443d9983-0c4e-4303-89ec-1a6e18c316ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:50:27.755178  791960 system_pods.go:61] "kube-controller-manager-newest-cni-600818" [b43750ab-bb60-4d03-8054-ddcd38bc1c64] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:50:27.755213  791960 system_pods.go:61] "kube-proxy-kms5c" [2446e186-b744-4098-b190-0a98b30804fd] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 11:50:27.755245  791960 system_pods.go:61] "kube-scheduler-newest-cni-600818" [be75d8e9-f0e3-419b-85a5-702fd1fc2975] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:50:27.755264  791960 system_pods.go:61] "storage-provisioner" [070b587d-9d48-4f2a-9b68-11cc8e004b8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 11:50:27.755287  791960 system_pods.go:74] duration metric: took 6.518535ms to wait for pod list to return data ...
	I1115 11:50:27.755319  791960 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:50:27.762079  791960 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 11:50:27.765760  791960 addons.go:515] duration metric: took 1.28391442s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 11:50:27.766203  791960 default_sa.go:45] found service account: "default"
	I1115 11:50:27.766220  791960 default_sa.go:55] duration metric: took 10.879302ms for default service account to be created ...
	I1115 11:50:27.766231  791960 kubeadm.go:587] duration metric: took 1.284855312s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 11:50:27.766272  791960 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:50:27.785427  791960 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-600818" context rescaled to 1 replicas
	I1115 11:50:27.786827  791960 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:50:27.786866  791960 node_conditions.go:123] node cpu capacity is 2
	I1115 11:50:27.786878  791960 node_conditions.go:105] duration metric: took 20.600569ms to run NodePressure ...
	I1115 11:50:27.786890  791960 start.go:242] waiting for startup goroutines ...
	I1115 11:50:27.786898  791960 start.go:247] waiting for cluster config update ...
	I1115 11:50:27.786910  791960 start.go:256] writing updated cluster config ...
	I1115 11:50:27.787195  791960 ssh_runner.go:195] Run: rm -f paused
	I1115 11:50:27.879712  791960 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:50:27.883222  791960 out.go:179] * Done! kubectl is now configured to use "newest-cni-600818" cluster and "default" namespace by default
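Note: the two interleaved start logs above (PID 787845 for "no-preload-126380" and PID 791960 for "newest-cni-600818") follow the same readiness sequence: poll the apiserver /healthz endpoint until it returns 200, then re-list kube-system pods with short increasing retry intervals (the retry.go lines) until the missing component, kube-dns here, turns Running. The following Go snippet is only a minimal sketch of that poll-with-backoff pattern, not minikube's actual implementation; the URL, timeout, and intervals are illustrative.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver-style /healthz endpoint until it returns
// HTTP 200 or the deadline expires, mirroring the "waiting for apiserver
// healthz status" phase recorded in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test clusters use self-signed certificates, so this sketch
		// skips certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200, as at 11:50:09.49 and 11:50:27.74 above
			}
		}
		time.Sleep(backoff)
		if backoff < 2*time.Second {
			backoff *= 2 // grow the retry interval, similar to the increasing retry.go waits
		}
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	// 192.168.85.2:8443 is the no-preload control-plane endpoint from the log.
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
```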
	
	
	==> CRI-O <==
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.448567178Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.457671183Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-kms5c/POD" id=9bbd300a-ca22-484f-bf55-17a649ab4ebf name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.457751348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.465057829Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=de42d3fc-d454-4454-b280-eaaecbf28530 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.496513289Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9bbd300a-ca22-484f-bf55-17a649ab4ebf name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.501248699Z" level=info msg="Ran pod sandbox 4564f90999abf7c38e2ea63d37a70b3c3700c20f025d358eedd43cc5ed256f7e with infra container: kube-system/kindnet-bcvw7/POD" id=de42d3fc-d454-4454-b280-eaaecbf28530 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.502930538Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ffb9463a-0af8-425f-8c50-4c3bfd116546 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.510407425Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4221860f-7000-41b0-9cb2-eebc444d905e name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.528478745Z" level=info msg="Ran pod sandbox 32190e593179d4234a404e2033db2671f02f38bdffb7c702d167b7e675fdf338 with infra container: kube-system/kube-proxy-kms5c/POD" id=9bbd300a-ca22-484f-bf55-17a649ab4ebf name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.536925931Z" level=info msg="Creating container: kube-system/kindnet-bcvw7/kindnet-cni" id=20d28817-a0f9-4100-9139-31fdfeaf98d6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.537167435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.541155637Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=397d05e9-82cc-492e-972b-0b4848f4eaeb name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.54462348Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=a1948cee-bb26-46ba-814d-d26cfc833405 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.550335696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.550655823Z" level=info msg="Creating container: kube-system/kube-proxy-kms5c/kube-proxy" id=c5f83a80-0cdf-437c-975b-75d9fdfca35b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.550773625Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.551260703Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.571416082Z" level=info msg="Created container 6e49ddc61f0bd2d1b6f674977473c6840f8e7ec2f5e788399d87aa9bc8913df9: kube-system/kindnet-bcvw7/kindnet-cni" id=20d28817-a0f9-4100-9139-31fdfeaf98d6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.574893148Z" level=info msg="Starting container: 6e49ddc61f0bd2d1b6f674977473c6840f8e7ec2f5e788399d87aa9bc8913df9" id=e148d33e-03b2-49db-9353-6c3875fd9f5a name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.576520792Z" level=info msg="Started container" PID=1481 containerID=6e49ddc61f0bd2d1b6f674977473c6840f8e7ec2f5e788399d87aa9bc8913df9 description=kube-system/kindnet-bcvw7/kindnet-cni id=e148d33e-03b2-49db-9353-6c3875fd9f5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=4564f90999abf7c38e2ea63d37a70b3c3700c20f025d358eedd43cc5ed256f7e
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.591317665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.599642355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.632787573Z" level=info msg="Created container 63a8981236b4c9f5855f020f5551f26589d61ef57b0218cf00309cdab51eb696: kube-system/kube-proxy-kms5c/kube-proxy" id=c5f83a80-0cdf-437c-975b-75d9fdfca35b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.634033568Z" level=info msg="Starting container: 63a8981236b4c9f5855f020f5551f26589d61ef57b0218cf00309cdab51eb696" id=7e7e39b3-b488-43ac-8e09-645925216419 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:50:27 newest-cni-600818 crio[838]: time="2025-11-15T11:50:27.639749295Z" level=info msg="Started container" PID=1491 containerID=63a8981236b4c9f5855f020f5551f26589d61ef57b0218cf00309cdab51eb696 description=kube-system/kube-proxy-kms5c/kube-proxy id=7e7e39b3-b488-43ac-8e09-645925216419 name=/runtime.v1.RuntimeService/StartContainer sandboxID=32190e593179d4234a404e2033db2671f02f38bdffb7c702d167b7e675fdf338
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	63a8981236b4c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   32190e593179d       kube-proxy-kms5c                            kube-system
	6e49ddc61f0bd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   1 second ago        Running             kindnet-cni               0                   4564f90999abf       kindnet-bcvw7                               kube-system
	4deaeafe5a77d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            0                   2779add54f1a3       kube-scheduler-newest-cni-600818            kube-system
	be04ba48559b3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      0                   e8e4187f819f5       etcd-newest-cni-600818                      kube-system
	82fc90cd49142       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            0                   749375779c742       kube-apiserver-newest-cni-600818            kube-system
	13503a1c270d5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   0                   f2ef3e0c3e7b2       kube-controller-manager-newest-cni-600818   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-600818
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-600818
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=newest-cni-600818
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_50_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:50:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-600818
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:50:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:50:22 +0000   Sat, 15 Nov 2025 11:50:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:50:22 +0000   Sat, 15 Nov 2025 11:50:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:50:22 +0000   Sat, 15 Nov 2025 11:50:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 15 Nov 2025 11:50:22 +0000   Sat, 15 Nov 2025 11:50:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-600818
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                c022f560-be97-45fe-81fb-2d2f59506bb6
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-600818                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-bcvw7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-600818             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-600818    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-kms5c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-600818             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Warning  CgroupV1                 15s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-600818 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-600818 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-600818 status is now: NodeHasSufficientPID
	  Normal   Starting                 7s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 7s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-600818 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-600818 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s                 kubelet          Node newest-cni-600818 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-600818 event: Registered Node newest-cni-600818 in Controller
	
	
	==> dmesg <==
	[Nov15 11:28] overlayfs: idmapped layers are currently not supported
	[ +23.116625] overlayfs: idmapped layers are currently not supported
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	[Nov15 11:44] overlayfs: idmapped layers are currently not supported
	[Nov15 11:46] overlayfs: idmapped layers are currently not supported
	[Nov15 11:47] overlayfs: idmapped layers are currently not supported
	[ +42.475391] overlayfs: idmapped layers are currently not supported
	[Nov15 11:48] overlayfs: idmapped layers are currently not supported
	[Nov15 11:49] overlayfs: idmapped layers are currently not supported
	[Nov15 11:50] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [be04ba48559b3ed7f39fc93029d1da168f0598ab7609a09174fef3200f57bf7e] <==
	{"level":"warn","ts":"2025-11-15T11:50:17.846199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:17.867263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:17.885633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:17.903271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:17.920372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:17.945659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:17.969661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:17.978957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:17.998063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.016420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.055262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.069870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.086151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.106628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.122196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.138909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.161334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.181306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.202146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.216837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.243193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.277448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.329271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.341887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:18.439118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38214","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:50:29 up  3:32,  0 user,  load average: 3.28, 3.31, 2.90
	Linux newest-cni-600818 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6e49ddc61f0bd2d1b6f674977473c6840f8e7ec2f5e788399d87aa9bc8913df9] <==
	I1115 11:50:27.694167       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:50:27.694610       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 11:50:27.694808       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:50:27.694863       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:50:27.694906       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:50:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:50:27.894376       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:50:27.894460       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:50:27.894498       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:50:27.895401       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [82fc90cd49142807b9580ea5a999d55baff5705ce3420d7c82539fa1edde3704] <==
	I1115 11:50:19.283955       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 11:50:19.284210       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 11:50:19.314884       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:50:19.315114       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 11:50:19.338920       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 11:50:19.340880       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:50:19.372115       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:50:19.372914       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 11:50:20.088388       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 11:50:20.104298       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 11:50:20.104328       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:50:21.029927       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:50:21.094828       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:50:21.196792       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 11:50:21.207396       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1115 11:50:21.208693       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 11:50:21.215217       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 11:50:21.238714       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 11:50:22.379248       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 11:50:22.414155       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 11:50:22.436523       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 11:50:27.082860       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1115 11:50:27.158305       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 11:50:27.304272       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:50:27.342298       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [13503a1c270d531625f665124388296fff1abb72f4402559c20851a94294cdc1] <==
	I1115 11:50:26.259796       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 11:50:26.259880       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 11:50:26.259948       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 11:50:26.260020       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 11:50:26.260057       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 11:50:26.260106       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 11:50:26.254736       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 11:50:26.261757       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:50:26.280380       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 11:50:26.280853       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 11:50:26.280963       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 11:50:26.281138       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 11:50:26.281208       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 11:50:26.282254       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-600818" podCIDRs=["10.42.0.0/24"]
	I1115 11:50:26.282338       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 11:50:26.282382       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:50:26.282618       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:50:26.282654       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:50:26.282396       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 11:50:26.286585       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 11:50:26.290554       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 11:50:26.292420       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 11:50:26.293876       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 11:50:26.308937       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 11:50:26.313594       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [63a8981236b4c9f5855f020f5551f26589d61ef57b0218cf00309cdab51eb696] <==
	I1115 11:50:27.738461       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:50:27.863077       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:50:27.975693       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:50:27.975741       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 11:50:27.975851       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:50:28.093192       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:50:28.093267       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:50:28.099123       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:50:28.099923       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:50:28.099952       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:50:28.103622       1 config.go:200] "Starting service config controller"
	I1115 11:50:28.103651       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:50:28.103766       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:50:28.103779       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:50:28.103965       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:50:28.103977       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:50:28.107340       1 config.go:309] "Starting node config controller"
	I1115 11:50:28.109148       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:50:28.109169       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:50:28.204257       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:50:28.204299       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 11:50:28.204337       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4deaeafe5a77d927cc9e8a301491495fdd3a0938d1de04abeb7fa10d5fab3255] <==
	E1115 11:50:19.277483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 11:50:19.277563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 11:50:19.277724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:50:19.285619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 11:50:19.285767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 11:50:19.285870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 11:50:19.286034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 11:50:19.286133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:50:19.286250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 11:50:19.286336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 11:50:20.090091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 11:50:20.118116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 11:50:20.144939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 11:50:20.162734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 11:50:20.204993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:50:20.265838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 11:50:20.314213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 11:50:20.325965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 11:50:20.339746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:50:20.342443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 11:50:20.435180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 11:50:20.490561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 11:50:20.541677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 11:50:20.560848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1115 11:50:22.370617       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 11:50:22 newest-cni-600818 kubelet[1313]: I1115 11:50:22.842235    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/088c9662c0234903510544a6220ff69e-kubeconfig\") pod \"kube-controller-manager-newest-cni-600818\" (UID: \"088c9662c0234903510544a6220ff69e\") " pod="kube-system/kube-controller-manager-newest-cni-600818"
	Nov 15 11:50:22 newest-cni-600818 kubelet[1313]: I1115 11:50:22.842252    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/69fbf685f839fe3e1e79e51f50fd76b7-kubeconfig\") pod \"kube-scheduler-newest-cni-600818\" (UID: \"69fbf685f839fe3e1e79e51f50fd76b7\") " pod="kube-system/kube-scheduler-newest-cni-600818"
	Nov 15 11:50:22 newest-cni-600818 kubelet[1313]: I1115 11:50:22.842279    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3bde4a8b045947fc1901640d7646833-etc-ca-certificates\") pod \"kube-apiserver-newest-cni-600818\" (UID: \"d3bde4a8b045947fc1901640d7646833\") " pod="kube-system/kube-apiserver-newest-cni-600818"
	Nov 15 11:50:22 newest-cni-600818 kubelet[1313]: I1115 11:50:22.842305    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3bde4a8b045947fc1901640d7646833-usr-share-ca-certificates\") pod \"kube-apiserver-newest-cni-600818\" (UID: \"d3bde4a8b045947fc1901640d7646833\") " pod="kube-system/kube-apiserver-newest-cni-600818"
	Nov 15 11:50:22 newest-cni-600818 kubelet[1313]: I1115 11:50:22.842323    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/088c9662c0234903510544a6220ff69e-flexvolume-dir\") pod \"kube-controller-manager-newest-cni-600818\" (UID: \"088c9662c0234903510544a6220ff69e\") " pod="kube-system/kube-controller-manager-newest-cni-600818"
	Nov 15 11:50:22 newest-cni-600818 kubelet[1313]: I1115 11:50:22.842341    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3bde4a8b045947fc1901640d7646833-ca-certs\") pod \"kube-apiserver-newest-cni-600818\" (UID: \"d3bde4a8b045947fc1901640d7646833\") " pod="kube-system/kube-apiserver-newest-cni-600818"
	Nov 15 11:50:22 newest-cni-600818 kubelet[1313]: I1115 11:50:22.852840    1313 kubelet_node_status.go:75] "Attempting to register node" node="newest-cni-600818"
	Nov 15 11:50:22 newest-cni-600818 kubelet[1313]: I1115 11:50:22.864598    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-600818" podStartSLOduration=0.86456833 podStartE2EDuration="864.56833ms" podCreationTimestamp="2025-11-15 11:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:50:22.845112454 +0000 UTC m=+0.553294344" watchObservedRunningTime="2025-11-15 11:50:22.86456833 +0000 UTC m=+0.572750204"
	Nov 15 11:50:22 newest-cni-600818 kubelet[1313]: I1115 11:50:22.884083    1313 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-600818"
	Nov 15 11:50:22 newest-cni-600818 kubelet[1313]: I1115 11:50:22.884247    1313 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-600818"
	Nov 15 11:50:23 newest-cni-600818 kubelet[1313]: I1115 11:50:23.079119    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-600818" podStartSLOduration=1.079089272 podStartE2EDuration="1.079089272s" podCreationTimestamp="2025-11-15 11:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:50:22.865289839 +0000 UTC m=+0.573471729" watchObservedRunningTime="2025-11-15 11:50:23.079089272 +0000 UTC m=+0.787271138"
	Nov 15 11:50:26 newest-cni-600818 kubelet[1313]: I1115 11:50:26.320252    1313 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 15 11:50:26 newest-cni-600818 kubelet[1313]: I1115 11:50:26.321310    1313 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 15 11:50:27 newest-cni-600818 kubelet[1313]: I1115 11:50:27.189788    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/75bd6a1d-29ff-4420-982f-97b36c4b5830-cni-cfg\") pod \"kindnet-bcvw7\" (UID: \"75bd6a1d-29ff-4420-982f-97b36c4b5830\") " pod="kube-system/kindnet-bcvw7"
	Nov 15 11:50:27 newest-cni-600818 kubelet[1313]: I1115 11:50:27.189840    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2hj7\" (UniqueName: \"kubernetes.io/projected/75bd6a1d-29ff-4420-982f-97b36c4b5830-kube-api-access-x2hj7\") pod \"kindnet-bcvw7\" (UID: \"75bd6a1d-29ff-4420-982f-97b36c4b5830\") " pod="kube-system/kindnet-bcvw7"
	Nov 15 11:50:27 newest-cni-600818 kubelet[1313]: I1115 11:50:27.189869    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2446e186-b744-4098-b190-0a98b30804fd-xtables-lock\") pod \"kube-proxy-kms5c\" (UID: \"2446e186-b744-4098-b190-0a98b30804fd\") " pod="kube-system/kube-proxy-kms5c"
	Nov 15 11:50:27 newest-cni-600818 kubelet[1313]: I1115 11:50:27.189897    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfclh\" (UniqueName: \"kubernetes.io/projected/2446e186-b744-4098-b190-0a98b30804fd-kube-api-access-sfclh\") pod \"kube-proxy-kms5c\" (UID: \"2446e186-b744-4098-b190-0a98b30804fd\") " pod="kube-system/kube-proxy-kms5c"
	Nov 15 11:50:27 newest-cni-600818 kubelet[1313]: I1115 11:50:27.189918    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75bd6a1d-29ff-4420-982f-97b36c4b5830-lib-modules\") pod \"kindnet-bcvw7\" (UID: \"75bd6a1d-29ff-4420-982f-97b36c4b5830\") " pod="kube-system/kindnet-bcvw7"
	Nov 15 11:50:27 newest-cni-600818 kubelet[1313]: I1115 11:50:27.189936    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2446e186-b744-4098-b190-0a98b30804fd-lib-modules\") pod \"kube-proxy-kms5c\" (UID: \"2446e186-b744-4098-b190-0a98b30804fd\") " pod="kube-system/kube-proxy-kms5c"
	Nov 15 11:50:27 newest-cni-600818 kubelet[1313]: I1115 11:50:27.189956    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75bd6a1d-29ff-4420-982f-97b36c4b5830-xtables-lock\") pod \"kindnet-bcvw7\" (UID: \"75bd6a1d-29ff-4420-982f-97b36c4b5830\") " pod="kube-system/kindnet-bcvw7"
	Nov 15 11:50:27 newest-cni-600818 kubelet[1313]: I1115 11:50:27.189972    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2446e186-b744-4098-b190-0a98b30804fd-kube-proxy\") pod \"kube-proxy-kms5c\" (UID: \"2446e186-b744-4098-b190-0a98b30804fd\") " pod="kube-system/kube-proxy-kms5c"
	Nov 15 11:50:27 newest-cni-600818 kubelet[1313]: I1115 11:50:27.318802    1313 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 11:50:27 newest-cni-600818 kubelet[1313]: W1115 11:50:27.487335    1313 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/crio-4564f90999abf7c38e2ea63d37a70b3c3700c20f025d358eedd43cc5ed256f7e WatchSource:0}: Error finding container 4564f90999abf7c38e2ea63d37a70b3c3700c20f025d358eedd43cc5ed256f7e: Status 404 returned error can't find the container with id 4564f90999abf7c38e2ea63d37a70b3c3700c20f025d358eedd43cc5ed256f7e
	Nov 15 11:50:27 newest-cni-600818 kubelet[1313]: I1115 11:50:27.818138    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kms5c" podStartSLOduration=0.818118216 podStartE2EDuration="818.118216ms" podCreationTimestamp="2025-11-15 11:50:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:50:27.760538748 +0000 UTC m=+5.468720622" watchObservedRunningTime="2025-11-15 11:50:27.818118216 +0000 UTC m=+5.526300090"
	Nov 15 11:50:29 newest-cni-600818 kubelet[1313]: I1115 11:50:29.632254    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bcvw7" podStartSLOduration=2.6322360529999997 podStartE2EDuration="2.632236053s" podCreationTimestamp="2025-11-15 11:50:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 11:50:27.819328247 +0000 UTC m=+5.527510130" watchObservedRunningTime="2025-11-15 11:50:29.632236053 +0000 UTC m=+7.340417927"
	

                                                
                                                
-- /stdout --
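
The node description in the logs above reports Ready=False with reason KubeletNotReady because no CNI configuration exists yet in /etc/cni/net.d/. A minimal sketch for pulling just that condition from the API, assuming kubectl is on PATH and the newest-cni-600818 kubeconfig context exists (both assumptions, not taken from this run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Query only the Ready condition that `kubectl describe node` prints above.
	out, err := exec.Command(
		"kubectl", "--context", "newest-cni-600818",
		"get", "node", "newest-cni-600818",
		"-o", `jsonpath={range .status.conditions[?(@.type=="Ready")]}{.status}{" "}{.reason}{"\n"}{end}`,
	).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	// Prints e.g. "False KubeletNotReady" until a CNI config appears in /etc/cni/net.d/.
	fmt.Print(string(out))
}

Once the CNI plugin (kindnet in this run, per its log above) writes its config, the same query should report True.
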
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-600818 -n newest-cni-600818
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-600818 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-k2pmf storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-600818 describe pod coredns-66bc5c9577-k2pmf storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-600818 describe pod coredns-66bc5c9577-k2pmf storage-provisioner: exit status 1 (81.894151ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-k2pmf" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-600818 describe pod coredns-66bc5c9577-k2pmf storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.49s)
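
The post-mortem above lists non-running pods with a field selector and then describes them, and the describe step returns NotFound, most likely because the helper passes no namespace (coredns and storage-provisioner run in kube-system) or because the pods were already gone. A rough standalone equivalent that keeps the namespace with each pod name; this is a sketch, not the helper itself, and the context name is taken from this report:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "newest-cni-600818" // context name from the report

	// Same field selector as the post-mortem helper, but emit "namespace name" pairs.
	list, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o", `jsonpath={range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}`,
	).Output()
	if err != nil {
		fmt.Println("listing non-running pods failed:", err)
		return
	}
	for _, line := range strings.Split(strings.TrimSpace(string(list)), "\n") {
		fields := strings.Fields(line)
		if len(fields) != 2 {
			continue
		}
		ns, pod := fields[0], fields[1]
		desc, err := exec.Command("kubectl", "--context", ctx, "-n", ns, "describe", "pod", pod).CombinedOutput()
		if err != nil {
			// Without the -n flag this is where the report hits "Error from server (NotFound)".
			fmt.Printf("describe %s/%s: %v\n%s", ns, pod, err, desc)
			continue
		}
		fmt.Print(string(desc))
	}
}
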

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (8.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-600818 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-600818 --alsologtostderr -v=1: exit status 80 (2.600639149s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-600818 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:50:52.023939  799612 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:50:52.024170  799612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:50:52.024207  799612 out.go:374] Setting ErrFile to fd 2...
	I1115 11:50:52.024228  799612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:50:52.024512  799612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:50:52.024789  799612 out.go:368] Setting JSON to false
	I1115 11:50:52.024847  799612 mustload.go:66] Loading cluster: newest-cni-600818
	I1115 11:50:52.025307  799612 config.go:182] Loaded profile config "newest-cni-600818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:50:52.025857  799612 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:50:52.070523  799612 host.go:66] Checking if "newest-cni-600818" exists ...
	I1115 11:50:52.070875  799612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:50:52.201490  799612 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-15 11:50:52.188930602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:50:52.202135  799612 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-600818 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 11:50:52.205589  799612 out.go:179] * Pausing node newest-cni-600818 ... 
	I1115 11:50:52.209876  799612 host.go:66] Checking if "newest-cni-600818" exists ...
	I1115 11:50:52.210212  799612 ssh_runner.go:195] Run: systemctl --version
	I1115 11:50:52.210263  799612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:52.252437  799612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:52.391170  799612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:50:52.426013  799612 pause.go:52] kubelet running: true
	I1115 11:50:52.426078  799612 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:50:52.861489  799612 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:50:52.861570  799612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:50:53.023538  799612 cri.go:89] found id: "508191357d8c46202caf105d2b19322ed0fce00bbd8bb676251d37ea88caa5fb"
	I1115 11:50:53.023566  799612 cri.go:89] found id: "6b6b51789b97b7a45064b14b4d84c0c009313bbad64adbc4381219fd21228755"
	I1115 11:50:53.023572  799612 cri.go:89] found id: "645246f7825202338380cb5d10ceb9da92cdfc53e1f942510d2442a0fd84a097"
	I1115 11:50:53.023576  799612 cri.go:89] found id: "fd7399c25f9e0b5ec2bd454e0007c03228ee3b5f4d4bf00dc22c645038b07897"
	I1115 11:50:53.023580  799612 cri.go:89] found id: "11203d5b5b35660740cae26a3b2082fe96faeca680bc0c57a5eb2ba26511cba1"
	I1115 11:50:53.023623  799612 cri.go:89] found id: "80865ff5e22d408a46025735f288fbc8807cecdd6680ae8eadc50da5c41cd3e6"
	I1115 11:50:53.023636  799612 cri.go:89] found id: ""
	I1115 11:50:53.023724  799612 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:50:53.036882  799612 retry.go:31] will retry after 330.824885ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:50:53Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:50:53.368429  799612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:50:53.390761  799612 pause.go:52] kubelet running: false
	I1115 11:50:53.390879  799612 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:50:53.661049  799612 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:50:53.661224  799612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:50:53.842200  799612 cri.go:89] found id: "508191357d8c46202caf105d2b19322ed0fce00bbd8bb676251d37ea88caa5fb"
	I1115 11:50:53.842224  799612 cri.go:89] found id: "6b6b51789b97b7a45064b14b4d84c0c009313bbad64adbc4381219fd21228755"
	I1115 11:50:53.842230  799612 cri.go:89] found id: "645246f7825202338380cb5d10ceb9da92cdfc53e1f942510d2442a0fd84a097"
	I1115 11:50:53.842235  799612 cri.go:89] found id: "fd7399c25f9e0b5ec2bd454e0007c03228ee3b5f4d4bf00dc22c645038b07897"
	I1115 11:50:53.842238  799612 cri.go:89] found id: "11203d5b5b35660740cae26a3b2082fe96faeca680bc0c57a5eb2ba26511cba1"
	I1115 11:50:53.842242  799612 cri.go:89] found id: "80865ff5e22d408a46025735f288fbc8807cecdd6680ae8eadc50da5c41cd3e6"
	I1115 11:50:53.842245  799612 cri.go:89] found id: ""
	I1115 11:50:53.842321  799612 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:50:53.859652  799612 retry.go:31] will retry after 220.272695ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:50:53Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:50:54.081098  799612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:50:54.096390  799612 pause.go:52] kubelet running: false
	I1115 11:50:54.096489  799612 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:50:54.343265  799612 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:50:54.343425  799612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:50:54.499475  799612 cri.go:89] found id: "508191357d8c46202caf105d2b19322ed0fce00bbd8bb676251d37ea88caa5fb"
	I1115 11:50:54.499497  799612 cri.go:89] found id: "6b6b51789b97b7a45064b14b4d84c0c009313bbad64adbc4381219fd21228755"
	I1115 11:50:54.499503  799612 cri.go:89] found id: "645246f7825202338380cb5d10ceb9da92cdfc53e1f942510d2442a0fd84a097"
	I1115 11:50:54.499507  799612 cri.go:89] found id: "fd7399c25f9e0b5ec2bd454e0007c03228ee3b5f4d4bf00dc22c645038b07897"
	I1115 11:50:54.499510  799612 cri.go:89] found id: "11203d5b5b35660740cae26a3b2082fe96faeca680bc0c57a5eb2ba26511cba1"
	I1115 11:50:54.499514  799612 cri.go:89] found id: "80865ff5e22d408a46025735f288fbc8807cecdd6680ae8eadc50da5c41cd3e6"
	I1115 11:50:54.499518  799612 cri.go:89] found id: ""
	I1115 11:50:54.499565  799612 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:50:54.518544  799612 out.go:203] 
	W1115 11:50:54.521508  799612 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:50:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:50:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 11:50:54.521532  799612 out.go:285] * 
	* 
	W1115 11:50:54.527944  799612 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 11:50:54.532525  799612 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-600818 --alsologtostderr -v=1 failed: exit status 80
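
The pause trace fails at the container-listing step: the namespace-scoped crictl listing succeeds, but sudo runc list -f json exits 1 because /run/runc does not exist on the node, so minikube aborts with GUEST_PAUSE after its retries. A minimal sketch that re-runs both commands inside the node container to make the contrast reproducible; using docker exec as the transport and the presence of sudo/runc/crictl inside the kic image are assumptions based on what the trace itself runs:

package main

import (
	"fmt"
	"os/exec"
)

// runInNode runs a command inside the kic node container via docker exec.
func runInNode(node string, args ...string) (string, error) {
	out, err := exec.Command("docker", append([]string{"exec", node}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	node := "newest-cni-600818" // container name from the docker inspect output below

	// The step the pause trace retries and finally fails on:
	// runc's default state directory /run/runc is missing, so listing exits 1.
	if out, err := runInNode(node, "sudo", "runc", "list", "-f", "json"); err != nil {
		fmt.Printf("runc list failed (as in the trace): %v\n%s\n", err, out)
	} else {
		fmt.Print(out)
	}

	// The CRI-level listing used earlier in the same trace still succeeds, since cri-o is running.
	if out, err := runInNode(node, "sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system"); err != nil {
		fmt.Printf("crictl ps failed: %v\n%s\n", err, out)
	} else {
		fmt.Print(out)
	}
}

Run as written, the runc invocation should reproduce the "open /run/runc: no such file or directory" error while the crictl listing still returns the six container IDs shown in the trace.
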
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-600818
helpers_test.go:243: (dbg) docker inspect newest-cni-600818:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b",
	        "Created": "2025-11-15T11:49:52.920740445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 796392,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:50:32.258601781Z",
	            "FinishedAt": "2025-11-15T11:50:31.292684994Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/hostname",
	        "HostsPath": "/var/lib/docker/containers/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/hosts",
	        "LogPath": "/var/lib/docker/containers/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b-json.log",
	        "Name": "/newest-cni-600818",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-600818:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-600818",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b",
	                "LowerDir": "/var/lib/docker/overlay2/6b840733d5eb5568a1b1a5e0e7404ea4d320669e261fcc419b6f5be4f5457db2-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6b840733d5eb5568a1b1a5e0e7404ea4d320669e261fcc419b6f5be4f5457db2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6b840733d5eb5568a1b1a5e0e7404ea4d320669e261fcc419b6f5be4f5457db2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6b840733d5eb5568a1b1a5e0e7404ea4d320669e261fcc419b6f5be4f5457db2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-600818",
	                "Source": "/var/lib/docker/volumes/newest-cni-600818/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-600818",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-600818",
	                "name.minikube.sigs.k8s.io": "newest-cni-600818",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5cc2ec08f86bf2f282d024d18d649b5e7a87fcb4d4de59d3caa43fc58264dc12",
	            "SandboxKey": "/var/run/docker/netns/5cc2ec08f86b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-600818": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:ab:b7:09:c2:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a3cd7ce9096f133c92aef6a7dc4fc2b918e8e85d34f96edb6bcf65eb55bcdc15",
	                    "EndpointID": "f2ae6ddbde5643d38444eca155d6e20a5ea74e63d2bc3bcf0a5f29a4cb17101f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-600818",
	                        "533b7ee97cf4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
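The inspect output above shows the container itself is still running and not paused at the Docker level ("Status": "running", "Paused": false); the same fields can be pulled directly with a Go template as a quick check (a diagnostic sketch using only fields present in the JSON above):

    docker inspect newest-cni-600818 --format '{{.State.Status}} paused={{.State.Paused}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'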
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-600818 -n newest-cni-600818
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-600818 -n newest-cni-600818: exit status 2 (521.667053ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
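The non-zero exit from status reflects component state rather than a hard failure (the harness itself notes it "may be ok"); a per-component breakdown can be obtained from the same status command in machine-readable form (a sketch; -o json is the JSON output mode of minikube status):

    out/minikube-linux-arm64 status -p newest-cni-600818 -o json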
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-600818 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-600818 logs -n 25: (1.690981437s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-404149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ stop    │ -p embed-certs-404149 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable dashboard -p embed-certs-404149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:49 UTC │
	│ image   │ default-k8s-diff-port-769461 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ pause   │ -p default-k8s-diff-port-769461 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p disable-driver-mounts-200933                                                                                                                                                                                                               │ disable-driver-mounts-200933 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ start   │ -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:50 UTC │
	│ image   │ embed-certs-404149 image list --format=json                                                                                                                                                                                                   │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ pause   │ -p embed-certs-404149 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │                     │
	│ delete  │ -p embed-certs-404149                                                                                                                                                                                                                         │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p embed-certs-404149                                                                                                                                                                                                                         │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ start   │ -p newest-cni-600818 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable metrics-server -p no-preload-126380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	│ stop    │ -p no-preload-126380 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable metrics-server -p newest-cni-600818 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	│ stop    │ -p newest-cni-600818 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable dashboard -p newest-cni-600818 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ start   │ -p newest-cni-600818 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable dashboard -p no-preload-126380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ start   │ -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	│ image   │ newest-cni-600818 image list --format=json                                                                                                                                                                                                    │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ pause   │ -p newest-cni-600818 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:50:36
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:50:36.267151  797007 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:50:36.267356  797007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:50:36.267382  797007 out.go:374] Setting ErrFile to fd 2...
	I1115 11:50:36.267401  797007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:50:36.267666  797007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:50:36.268107  797007 out.go:368] Setting JSON to false
	I1115 11:50:36.269112  797007 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12787,"bootTime":1763194649,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:50:36.269207  797007 start.go:143] virtualization:  
	I1115 11:50:36.273947  797007 out.go:179] * [no-preload-126380] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:50:36.277205  797007 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:50:36.277284  797007 notify.go:221] Checking for updates...
	I1115 11:50:36.283904  797007 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:50:36.286838  797007 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:50:36.290388  797007 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:50:36.293313  797007 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:50:36.296164  797007 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:50:36.299607  797007 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:50:36.300151  797007 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:50:36.331381  797007 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:50:36.331493  797007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:50:36.418354  797007 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-15 11:50:36.40556829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:50:36.418462  797007 docker.go:319] overlay module found
	I1115 11:50:36.421634  797007 out.go:179] * Using the docker driver based on existing profile
	I1115 11:50:36.424608  797007 start.go:309] selected driver: docker
	I1115 11:50:36.424631  797007 start.go:930] validating driver "docker" against &{Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:50:36.424744  797007 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:50:36.425528  797007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:50:36.488567  797007 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-15 11:50:36.473029239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:50:36.488997  797007 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:50:36.489028  797007 cni.go:84] Creating CNI manager for ""
	I1115 11:50:36.489083  797007 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:50:36.489120  797007 start.go:353] cluster config:
	{Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:50:36.492434  797007 out.go:179] * Starting "no-preload-126380" primary control-plane node in "no-preload-126380" cluster
	I1115 11:50:36.495297  797007 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:50:36.498262  797007 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:50:36.501207  797007 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:50:36.501361  797007 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/config.json ...
	I1115 11:50:36.501678  797007 cache.go:107] acquiring lock: {Name:mk91726f44286832b0046d8499f5d58ff7ad2b6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.501754  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1115 11:50:36.501767  797007 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.879µs
	I1115 11:50:36.501775  797007 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1115 11:50:36.501787  797007 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:50:36.501877  797007 cache.go:107] acquiring lock: {Name:mkb69d6ceae6b9540e167400909c918adeec9369 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.501919  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1115 11:50:36.501925  797007 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 57.773µs
	I1115 11:50:36.501931  797007 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1115 11:50:36.501942  797007 cache.go:107] acquiring lock: {Name:mk100238a706e702239a000cdfd80c281f376431 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.501969  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1115 11:50:36.501974  797007 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 33.871µs
	I1115 11:50:36.501980  797007 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1115 11:50:36.501989  797007 cache.go:107] acquiring lock: {Name:mk15eeacf94b66be4392721a733df868bc784101 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.502015  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1115 11:50:36.502020  797007 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 31.385µs
	I1115 11:50:36.502025  797007 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1115 11:50:36.502038  797007 cache.go:107] acquiring lock: {Name:mkb04d459fbb71ba8df962665fc7ab481f00418b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.502063  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1115 11:50:36.502069  797007 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 35.873µs
	I1115 11:50:36.502079  797007 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1115 11:50:36.502094  797007 cache.go:107] acquiring lock: {Name:mk87d816e36c32f87fd55930f6a9d59e6dfc4a0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.502120  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1115 11:50:36.502125  797007 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 36.677µs
	I1115 11:50:36.502130  797007 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1115 11:50:36.502139  797007 cache.go:107] acquiring lock: {Name:mk10696b84637583e56394b885fa921b6d221577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.502165  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1115 11:50:36.502169  797007 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.066µs
	I1115 11:50:36.502175  797007 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1115 11:50:36.502184  797007 cache.go:107] acquiring lock: {Name:mkd034e18ce491e5f4eb3166d5f81cee9da0de03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.502209  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1115 11:50:36.502214  797007 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 31.688µs
	I1115 11:50:36.502220  797007 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1115 11:50:36.502226  797007 cache.go:87] Successfully saved all images to host disk.
	I1115 11:50:36.521564  797007 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:50:36.521588  797007 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:50:36.521601  797007 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:50:36.521624  797007 start.go:360] acquireMachinesLock for no-preload-126380: {Name:mk5469ab80c2d37eee16becc95c7569af1cc4687 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.521680  797007 start.go:364] duration metric: took 35.594µs to acquireMachinesLock for "no-preload-126380"
	I1115 11:50:36.521704  797007 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:50:36.521713  797007 fix.go:54] fixHost starting: 
	I1115 11:50:36.521972  797007 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:50:36.543870  797007 fix.go:112] recreateIfNeeded on no-preload-126380: state=Stopped err=<nil>
	W1115 11:50:36.543904  797007 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:50:32.227533  796265 out.go:252] * Restarting existing docker container for "newest-cni-600818" ...
	I1115 11:50:32.227637  796265 cli_runner.go:164] Run: docker start newest-cni-600818
	I1115 11:50:32.456336  796265 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:50:32.478967  796265 kic.go:430] container "newest-cni-600818" state is running.
	I1115 11:50:32.479406  796265 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-600818
	I1115 11:50:32.500540  796265 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/config.json ...
	I1115 11:50:32.500867  796265 machine.go:94] provisionDockerMachine start ...
	I1115 11:50:32.501196  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:32.525526  796265 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:32.526315  796265 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 11:50:32.526334  796265 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:50:32.527577  796265 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:50:35.692476  796265 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-600818
	
	I1115 11:50:35.692503  796265 ubuntu.go:182] provisioning hostname "newest-cni-600818"
	I1115 11:50:35.692564  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:35.714592  796265 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:35.715053  796265 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 11:50:35.715070  796265 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-600818 && echo "newest-cni-600818" | sudo tee /etc/hostname
	I1115 11:50:35.918636  796265 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-600818
	
	I1115 11:50:35.918712  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:35.951068  796265 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:35.951372  796265 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 11:50:35.951388  796265 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-600818' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-600818/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-600818' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:50:36.120881  796265 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:50:36.120922  796265 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:50:36.120948  796265 ubuntu.go:190] setting up certificates
	I1115 11:50:36.120958  796265 provision.go:84] configureAuth start
	I1115 11:50:36.121024  796265 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-600818
	I1115 11:50:36.148474  796265 provision.go:143] copyHostCerts
	I1115 11:50:36.148551  796265 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:50:36.148580  796265 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:50:36.148662  796265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:50:36.148815  796265 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:50:36.148827  796265 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:50:36.148884  796265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:50:36.149009  796265 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:50:36.149021  796265 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:50:36.149055  796265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:50:36.149123  796265 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.newest-cni-600818 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-600818]
	I1115 11:50:36.346553  796265 provision.go:177] copyRemoteCerts
	I1115 11:50:36.346645  796265 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:50:36.346720  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:36.376292  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:36.482427  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:50:36.501810  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 11:50:36.530889  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:50:36.554280  796265 provision.go:87] duration metric: took 433.302933ms to configureAuth
	I1115 11:50:36.554308  796265 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:50:36.554516  796265 config.go:182] Loaded profile config "newest-cni-600818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:50:36.554633  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:36.589108  796265 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:36.589527  796265 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 11:50:36.589548  796265 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:50:36.968543  796265 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:50:36.968568  796265 machine.go:97] duration metric: took 4.467688814s to provisionDockerMachine
	I1115 11:50:36.968579  796265 start.go:293] postStartSetup for "newest-cni-600818" (driver="docker")
	I1115 11:50:36.968590  796265 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:50:36.968664  796265 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:50:36.968717  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:36.989657  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:37.108094  796265 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:50:37.113147  796265 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:50:37.113178  796265 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:50:37.113189  796265 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:50:37.113242  796265 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:50:37.113343  796265 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:50:37.113453  796265 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:50:37.124166  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:50:37.150767  796265 start.go:296] duration metric: took 182.172188ms for postStartSetup
	I1115 11:50:37.150861  796265 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:50:37.150921  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:37.174384  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:37.286131  796265 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:50:37.292738  796265 fix.go:56] duration metric: took 5.085135741s for fixHost
	I1115 11:50:37.292761  796265 start.go:83] releasing machines lock for "newest-cni-600818", held for 5.085181961s
	I1115 11:50:37.292827  796265 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-600818
	I1115 11:50:37.312052  796265 ssh_runner.go:195] Run: cat /version.json
	I1115 11:50:37.312117  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:37.313162  796265 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:50:37.313298  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:37.363879  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:37.366935  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:37.492818  796265 ssh_runner.go:195] Run: systemctl --version
	I1115 11:50:37.593286  796265 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:50:37.630016  796265 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:50:37.634352  796265 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:50:37.634428  796265 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:50:37.642421  796265 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:50:37.642443  796265 start.go:496] detecting cgroup driver to use...
	I1115 11:50:37.642474  796265 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:50:37.642522  796265 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:50:37.658308  796265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:50:37.671639  796265 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:50:37.671721  796265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:50:37.687635  796265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:50:37.701317  796265 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:50:37.819670  796265 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:50:37.948704  796265 docker.go:234] disabling docker service ...
	I1115 11:50:37.948816  796265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:50:37.964701  796265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:50:37.978029  796265 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:50:38.108094  796265 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:50:38.234612  796265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:50:38.248255  796265 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:50:38.262267  796265 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:50:38.262359  796265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:38.271086  796265 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:50:38.271180  796265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:38.280096  796265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:38.289429  796265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:38.298948  796265 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:50:38.307687  796265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:38.317892  796265 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:38.333363  796265 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
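The sed edits above complete the cri-o drop-in rewrite. A quick sanity check of the result (a minimal sketch, assuming the same drop-in path /etc/crio/crio.conf.d/02-crio.conf that the commands target):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected after the edits (indentation approximate):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",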
	I1115 11:50:38.347110  796265 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:50:38.360375  796265 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:50:38.378297  796265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:38.561962  796265 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:50:38.696660  796265 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:50:38.696777  796265 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:50:38.700800  796265 start.go:564] Will wait 60s for crictl version
	I1115 11:50:38.700886  796265 ssh_runner.go:195] Run: which crictl
	I1115 11:50:38.704931  796265 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:50:38.732293  796265 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:50:38.732433  796265 ssh_runner.go:195] Run: crio --version
	I1115 11:50:38.760443  796265 ssh_runner.go:195] Run: crio --version
	I1115 11:50:38.794705  796265 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:50:38.797503  796265 cli_runner.go:164] Run: docker network inspect newest-cni-600818 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:50:38.818619  796265 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 11:50:38.822531  796265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
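If the /etc/hosts rewrite above succeeded, the gateway alias should now resolve inside the node; a quick check over the same SSH session would be:

    grep 'host.minikube.internal' /etc/hosts
    # 192.168.76.1	host.minikube.internal

The identical filter-and-append pattern is reused further down for control-plane.minikube.internal.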
	I1115 11:50:38.835080  796265 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 11:50:38.837852  796265 kubeadm.go:884] updating cluster {Name:newest-cni-600818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:50:38.838007  796265 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:50:38.838090  796265 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:50:38.870465  796265 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:50:38.870489  796265 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:50:38.870550  796265 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:50:38.895131  796265 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:50:38.895151  796265 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:50:38.895159  796265 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 11:50:38.895254  796265 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-600818 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
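The [Unit]/[Service] fragment above ends up as a systemd drop-in (the 367-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). On the node it can be inspected and reloaded with standard systemd tooling:

    sudo systemctl cat kubelet      # shows kubelet.service plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload    # picks up the drop-in, as this run does before 'systemctl start kubelet'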
	I1115 11:50:38.895336  796265 ssh_runner.go:195] Run: crio config
	I1115 11:50:38.950969  796265 cni.go:84] Creating CNI manager for ""
	I1115 11:50:38.950995  796265 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:50:38.951014  796265 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 11:50:38.951082  796265 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-600818 NodeName:newest-cni-600818 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:50:38.951247  796265 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-600818"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 11:50:38.951339  796265 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:50:38.959018  796265 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:50:38.959139  796265 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:50:38.966640  796265 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 11:50:38.979146  796265 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:50:38.991649  796265 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
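The kubeadm config rendered above is what was just copied to /var/tmp/minikube/kubeadm.yaml.new (2212 bytes); later in this run it is diffed against the copy already on disk to decide whether the control plane needs reconfiguring. The same check by hand:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    # broadly, empty output means the running cluster does not require reconfiguration,
    # which is the conclusion logged below for 192.168.76.2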
	I1115 11:50:39.007656  796265 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:50:39.011938  796265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:50:39.022395  796265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:39.151080  796265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:50:39.166685  796265 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818 for IP: 192.168.76.2
	I1115 11:50:39.166703  796265 certs.go:195] generating shared ca certs ...
	I1115 11:50:39.166719  796265 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:39.166855  796265 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:50:39.166894  796265 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:50:39.166901  796265 certs.go:257] generating profile certs ...
	I1115 11:50:39.166988  796265 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/client.key
	I1115 11:50:39.167055  796265 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key.a60e7b42
	I1115 11:50:39.167202  796265 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.key
	I1115 11:50:39.167355  796265 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:50:39.167424  796265 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:50:39.167448  796265 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:50:39.167514  796265 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:50:39.167570  796265 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:50:39.167625  796265 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:50:39.167697  796265 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:50:39.168361  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:50:39.187624  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:50:39.205678  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:50:39.223440  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:50:39.241495  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 11:50:39.260942  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 11:50:39.279014  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:50:39.297412  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 11:50:39.315462  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:50:39.336542  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:50:39.357969  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:50:39.378742  796265 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:50:39.406212  796265 ssh_runner.go:195] Run: openssl version
	I1115 11:50:39.413922  796265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:50:39.423263  796265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:50:39.427289  796265 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:50:39.427373  796265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:50:39.468426  796265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:50:39.476478  796265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:50:39.484515  796265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:50:39.488344  796265 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:50:39.488411  796265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:50:39.532050  796265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:50:39.540064  796265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:50:39.548449  796265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:39.552191  796265 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:39.552262  796265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:39.593210  796265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
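The hash/symlink pairs above follow the standard OpenSSL c_rehash layout: every CA under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 link so OpenSSL can locate it by hash. For minikubeCA the equivalent two-liner is:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"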
	I1115 11:50:39.601498  796265 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:50:39.605549  796265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:50:39.646868  796265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:50:39.688052  796265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:50:39.729509  796265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:50:39.771171  796265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:50:39.812662  796265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
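Each -checkend 86400 probe above asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, non-zero means it expires inside that window and would need to be regenerated. Standalone form for one of the checked certs:

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid for at least another 24h" \
      || echo "expires within 24h"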
	I1115 11:50:39.861313  796265 kubeadm.go:401] StartCluster: {Name:newest-cni-600818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:50:39.861467  796265 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:50:39.861566  796265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:50:39.948906  796265 cri.go:89] found id: ""
	I1115 11:50:39.949004  796265 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:50:39.965393  796265 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:50:39.965471  796265 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:50:39.965551  796265 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:50:39.991818  796265 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:50:39.992285  796265 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-600818" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:50:39.992520  796265 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-584713/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-600818" cluster setting kubeconfig missing "newest-cni-600818" context setting]
	I1115 11:50:39.992893  796265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:39.994475  796265 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:50:40.035141  796265 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1115 11:50:40.035179  796265 kubeadm.go:602] duration metric: took 69.694251ms to restartPrimaryControlPlane
	I1115 11:50:40.035222  796265 kubeadm.go:403] duration metric: took 173.93672ms to StartCluster
	I1115 11:50:40.035240  796265 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:40.035335  796265 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:50:40.036203  796265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:40.036527  796265 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:50:40.037140  796265 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:50:40.037236  796265 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-600818"
	I1115 11:50:40.037273  796265 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-600818"
	W1115 11:50:40.037284  796265 addons.go:248] addon storage-provisioner should already be in state true
	I1115 11:50:40.037312  796265 host.go:66] Checking if "newest-cni-600818" exists ...
	I1115 11:50:40.038125  796265 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:50:40.038510  796265 config.go:182] Loaded profile config "newest-cni-600818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:50:40.038742  796265 addons.go:70] Setting dashboard=true in profile "newest-cni-600818"
	I1115 11:50:40.038765  796265 addons.go:239] Setting addon dashboard=true in "newest-cni-600818"
	W1115 11:50:40.038772  796265 addons.go:248] addon dashboard should already be in state true
	I1115 11:50:40.038809  796265 host.go:66] Checking if "newest-cni-600818" exists ...
	I1115 11:50:40.039246  796265 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:50:40.040315  796265 addons.go:70] Setting default-storageclass=true in profile "newest-cni-600818"
	I1115 11:50:40.040396  796265 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-600818"
	I1115 11:50:40.040795  796265 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:50:40.044005  796265 out.go:179] * Verifying Kubernetes components...
	I1115 11:50:40.056238  796265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:40.119578  796265 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:50:40.119675  796265 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 11:50:40.123856  796265 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:50:40.123886  796265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:50:40.123965  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:40.128698  796265 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 11:50:36.547322  797007 out.go:252] * Restarting existing docker container for "no-preload-126380" ...
	I1115 11:50:36.547402  797007 cli_runner.go:164] Run: docker start no-preload-126380
	I1115 11:50:36.829683  797007 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:50:36.857025  797007 kic.go:430] container "no-preload-126380" state is running.
	I1115 11:50:36.857412  797007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-126380
	I1115 11:50:36.889091  797007 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/config.json ...
	I1115 11:50:36.889332  797007 machine.go:94] provisionDockerMachine start ...
	I1115 11:50:36.889400  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:36.915214  797007 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:36.915529  797007 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I1115 11:50:36.915544  797007 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:50:36.917498  797007 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:50:40.119377  797007 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-126380
	
	I1115 11:50:40.119400  797007 ubuntu.go:182] provisioning hostname "no-preload-126380"
	I1115 11:50:40.119470  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:40.195305  797007 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:40.195625  797007 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I1115 11:50:40.195637  797007 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-126380 && echo "no-preload-126380" | sudo tee /etc/hostname
	I1115 11:50:40.426167  797007 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-126380
	
	I1115 11:50:40.426319  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:40.458743  797007 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:40.459049  797007 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I1115 11:50:40.459066  797007 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-126380' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-126380/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-126380' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:50:40.649776  797007 main.go:143] libmachine: SSH cmd err, output: <nil>: 
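The remote snippet that just ran keeps /etc/hosts consistent with the new hostname: if no entry for no-preload-126380 exists, it either rewrites the 127.0.1.1 line or appends one. A minimal check afterwards (assuming the entry was added under 127.0.1.1 rather than already present under another address):

    grep '^127\.0\.1\.1' /etc/hosts
    # 127.0.1.1 no-preload-126380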
	I1115 11:50:40.649821  797007 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:50:40.649852  797007 ubuntu.go:190] setting up certificates
	I1115 11:50:40.649861  797007 provision.go:84] configureAuth start
	I1115 11:50:40.649928  797007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-126380
	I1115 11:50:40.675359  797007 provision.go:143] copyHostCerts
	I1115 11:50:40.675427  797007 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:50:40.675445  797007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:50:40.675528  797007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:50:40.675627  797007 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:50:40.675638  797007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:50:40.675665  797007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:50:40.675721  797007 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:50:40.675730  797007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:50:40.675754  797007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:50:40.675803  797007 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.no-preload-126380 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-126380]
	I1115 11:50:41.086185  797007 provision.go:177] copyRemoteCerts
	I1115 11:50:41.086276  797007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:50:41.086326  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:41.106402  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:41.223102  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:50:41.258109  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
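The server.pem just copied to /etc/docker/server.pem was generated with the SAN list shown in the provision line above (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-126380). A hedged way to confirm on the node (output wording and ordering are approximate):

    sudo openssl x509 -noout -ext subjectAltName -in /etc/docker/server.pem
    # X509v3 Subject Alternative Name:
    #     DNS:localhost, DNS:minikube, DNS:no-preload-126380, IP Address:127.0.0.1, IP Address:192.168.85.2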
	I1115 11:50:40.133004  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 11:50:40.133033  796265 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 11:50:40.133113  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:40.135326  796265 addons.go:239] Setting addon default-storageclass=true in "newest-cni-600818"
	W1115 11:50:40.135344  796265 addons.go:248] addon default-storageclass should already be in state true
	I1115 11:50:40.135369  796265 host.go:66] Checking if "newest-cni-600818" exists ...
	I1115 11:50:40.135785  796265 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:50:40.226109  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:40.230998  796265 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:50:40.231021  796265 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:50:40.231084  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:40.233268  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:40.269744  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:40.489148  796265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:50:40.622502  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 11:50:40.622526  796265 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 11:50:40.652126  796265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:50:40.753619  796265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:50:40.769386  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 11:50:40.769423  796265 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 11:50:40.934252  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 11:50:40.934275  796265 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 11:50:41.038077  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 11:50:41.038101  796265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 11:50:41.113283  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 11:50:41.113312  796265 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 11:50:41.145173  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 11:50:41.145198  796265 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 11:50:41.181056  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 11:50:41.181080  796265 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 11:50:41.205992  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 11:50:41.206017  796265 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 11:50:41.231640  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:50:41.231677  796265 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 11:50:41.258461  796265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
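The batched apply above installs all ten dashboard manifests with the node-local kubeconfig. The same invocation style can be reused to watch the addon come up (the kubernetes-dashboard namespace is an assumption based on the dashboard-ns.yaml manifest, not something printed in this log):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get deploy,svc,pods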
	I1115 11:50:41.287584  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:50:41.330322  797007 provision.go:87] duration metric: took 680.441576ms to configureAuth
	I1115 11:50:41.330347  797007 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:50:41.330538  797007 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:50:41.330643  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:41.382324  797007 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:41.382636  797007 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I1115 11:50:41.382650  797007 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:50:41.849362  797007 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:50:41.849459  797007 machine.go:97] duration metric: took 4.96010913s to provisionDockerMachine
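The sysconfig file written a few lines above carries minikube's extra cri-o flags; --insecure-registry 10.96.0.0/12 marks the whole service CIDR as an insecure registry range so in-cluster registries can be pulled without TLS. The file itself is easy to inspect (how crio.service sources it, presumably through an EnvironmentFile drop-in, is not shown in this log):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '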
	I1115 11:50:41.849491  797007 start.go:293] postStartSetup for "no-preload-126380" (driver="docker")
	I1115 11:50:41.849534  797007 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:50:41.849621  797007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:50:41.849698  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:41.894448  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:42.017409  797007 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:50:42.022660  797007 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:50:42.022688  797007 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:50:42.022708  797007 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:50:42.022768  797007 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:50:42.022846  797007 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:50:42.022948  797007 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:50:42.034335  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:50:42.074669  797007 start.go:296] duration metric: took 225.12941ms for postStartSetup
	I1115 11:50:42.074800  797007 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:50:42.074934  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:42.116026  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:42.257459  797007 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:50:42.267725  797007 fix.go:56] duration metric: took 5.746003726s for fixHost
	I1115 11:50:42.267754  797007 start.go:83] releasing machines lock for "no-preload-126380", held for 5.746060219s
	I1115 11:50:42.267848  797007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-126380
	I1115 11:50:42.293821  797007 ssh_runner.go:195] Run: cat /version.json
	I1115 11:50:42.293891  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:42.294131  797007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:50:42.294195  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:42.325071  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:42.341061  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:42.464613  797007 ssh_runner.go:195] Run: systemctl --version
	I1115 11:50:42.595464  797007 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:50:42.652188  797007 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:50:42.661924  797007 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:50:42.662019  797007 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:50:42.675999  797007 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:50:42.676022  797007 start.go:496] detecting cgroup driver to use...
	I1115 11:50:42.676076  797007 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:50:42.676143  797007 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:50:42.705099  797007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:50:42.726900  797007 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:50:42.727019  797007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:50:42.749896  797007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:50:42.781444  797007 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:50:42.980447  797007 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:50:43.194580  797007 docker.go:234] disabling docker service ...
	I1115 11:50:43.194728  797007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:50:43.226697  797007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:50:43.250985  797007 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:50:43.478864  797007 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:50:43.698595  797007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:50:43.712931  797007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:50:43.731691  797007 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:50:43.731816  797007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:43.749429  797007 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:50:43.749497  797007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:43.764286  797007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:43.780657  797007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:43.796600  797007 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:50:43.809563  797007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:43.820456  797007 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:43.833450  797007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:43.846236  797007 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:50:43.857847  797007 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:50:43.871768  797007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:44.055554  797007 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:50:44.253250  797007 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:50:44.253390  797007 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:50:44.259434  797007 start.go:564] Will wait 60s for crictl version
	I1115 11:50:44.259552  797007 ssh_runner.go:195] Run: which crictl
	I1115 11:50:44.269756  797007 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:50:44.320757  797007 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:50:44.323425  797007 ssh_runner.go:195] Run: crio --version
	I1115 11:50:44.393779  797007 ssh_runner.go:195] Run: crio --version
	I1115 11:50:44.459161  797007 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:50:44.462151  797007 cli_runner.go:164] Run: docker network inspect no-preload-126380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:50:44.487121  797007 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 11:50:44.492214  797007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:50:44.514364  797007 kubeadm.go:884] updating cluster {Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:50:44.514479  797007 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:50:44.514520  797007 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:50:44.590628  797007 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:50:44.590649  797007 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:50:44.590656  797007 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1115 11:50:44.590754  797007 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-126380 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:50:44.590833  797007 ssh_runner.go:195] Run: crio config
	I1115 11:50:44.667923  797007 cni.go:84] Creating CNI manager for ""
	I1115 11:50:44.667949  797007 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:50:44.667996  797007 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:50:44.668027  797007 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-126380 NodeName:no-preload-126380 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:50:44.668198  797007 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-126380"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
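	For reference, the generated KubeletConfiguration document above can be parsed and sanity-checked outside minikube. The sketch below is illustrative only and not minikube code: it assumes the gopkg.in/yaml.v3 package and quotes just a few fields from the dump above (cgroupDriver, the CRI-O socket endpoint).

	// Illustrative sketch only (not minikube code): parse an excerpt of the
	// KubeletConfiguration printed above and check the values the log relies on.
	package main

	import (
		"fmt"
		"log"

		"gopkg.in/yaml.v3" // assumed dependency for this sketch
	)

	const kubeletCfg = `
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	clusterDomain: "cluster.local"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	`

	func main() {
		var cfg map[string]interface{}
		if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
			log.Fatalf("parse kubelet config: %v", err)
		}
		// The dump above uses cgroupfs and the CRI-O socket; flag anything else.
		if cfg["cgroupDriver"] != "cgroupfs" {
			fmt.Println("unexpected cgroupDriver:", cfg["cgroupDriver"])
		}
		if cfg["containerRuntimeEndpoint"] != "unix:///var/run/crio/crio.sock" {
			fmt.Println("unexpected CRI endpoint:", cfg["containerRuntimeEndpoint"])
		}
		fmt.Println("kubelet config parsed OK")
	}
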
	I1115 11:50:44.668286  797007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:50:44.678363  797007 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:50:44.678456  797007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:50:44.691102  797007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 11:50:44.714008  797007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:50:44.737390  797007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1115 11:50:44.761512  797007 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:50:44.768433  797007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
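	The /bin/bash one-liner above pins control-plane.minikube.internal in /etc/hosts by dropping any previous entry for that hostname and appending a fresh mapping. The Go sketch below expresses the same logic for readability; it is illustrative only, writes the result to stdout rather than /etc/hosts (so it needs no root), and takes the hostname and IP from the log line above.

	// Illustrative sketch only: same filtering/appending as the bash one-liner above.
	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		const ip = "192.168.85.2"

		f, err := os.Open("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := sc.Text()
			// Drop any previous pin, like grep -v $'\tcontrol-plane.minikube.internal$'.
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			fmt.Println(line)
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
		// Append the fresh mapping, like the echo in the log above.
		fmt.Printf("%s\t%s\n", ip, host)
	}
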
	I1115 11:50:44.780041  797007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:44.983687  797007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:50:45.001226  797007 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380 for IP: 192.168.85.2
	I1115 11:50:45.001251  797007 certs.go:195] generating shared ca certs ...
	I1115 11:50:45.001283  797007 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:45.001527  797007 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:50:45.001585  797007 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:50:45.001594  797007 certs.go:257] generating profile certs ...
	I1115 11:50:45.001696  797007 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.key
	I1115 11:50:45.001766  797007 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key.d85d6acb
	I1115 11:50:45.001809  797007 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.key
	I1115 11:50:45.001932  797007 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:50:45.001966  797007 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:50:45.001977  797007 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:50:45.002002  797007 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:50:45.002025  797007 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:50:45.002047  797007 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:50:45.002090  797007 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:50:45.002743  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:50:45.108098  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:50:45.160411  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:50:45.207988  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:50:45.279794  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 11:50:45.378346  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:50:45.441575  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:50:45.485380  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:50:45.518471  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:50:45.552077  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:50:45.590104  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:50:45.629953  797007 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:50:45.654237  797007 ssh_runner.go:195] Run: openssl version
	I1115 11:50:45.666630  797007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:50:45.679967  797007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:50:45.684180  797007 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:50:45.684287  797007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:50:45.731731  797007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:50:45.739584  797007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:50:45.748123  797007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:45.752345  797007 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:45.752470  797007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:45.794478  797007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:50:45.802793  797007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:50:45.811527  797007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:50:45.820350  797007 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:50:45.820473  797007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:50:45.866318  797007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:50:45.874800  797007 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:50:45.879492  797007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:50:45.944532  797007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:50:46.035899  797007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:50:46.115428  797007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:50:46.191551  797007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:50:46.322813  797007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
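	The openssl x509 -checkend 86400 runs above fail if a certificate expires within the next 24 hours. The sketch below is a minimal standard-library Go equivalent, not minikube's implementation; the certificate path is just one of the files checked above and any PEM-encoded certificate works.

	// Illustrative sketch only: approximate `openssl x509 -checkend 86400` in Go.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// -checkend 86400: non-zero exit if the cert expires within 86400 seconds.
		if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid beyond 24h, expires:", cert.NotAfter)
	}
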
	I1115 11:50:46.462645  797007 kubeadm.go:401] StartCluster: {Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:50:46.462789  797007 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:50:46.462888  797007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:50:46.596455  797007 cri.go:89] found id: "ff27b73ca8f1765a9b5e411c5c5a50ecdc283b3f9ac1d25c020e18cc04187039"
	I1115 11:50:46.596527  797007 cri.go:89] found id: "16ac7fdb8e9ed235613c8255c801b9a65efe815d89103579d0f55fa48408628f"
	I1115 11:50:46.596547  797007 cri.go:89] found id: "ab769dc54851c40c74b065b75a3f67d4f8d0132a1f1e065c9daa886d8665fdc7"
	I1115 11:50:46.596566  797007 cri.go:89] found id: "57c368e28f36eee195d648e761727c0670d2cfaa223fa5be99062e847379937c"
	I1115 11:50:46.596584  797007 cri.go:89] found id: ""
	I1115 11:50:46.596662  797007 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 11:50:46.648397  797007 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:50:46Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:50:46.648533  797007 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:50:46.666704  797007 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:50:46.666776  797007 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:50:46.666855  797007 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:50:46.693517  797007 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:50:46.694250  797007 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-126380" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:50:46.694571  797007 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-584713/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-126380" cluster setting kubeconfig missing "no-preload-126380" context setting]
	I1115 11:50:46.695148  797007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:46.697112  797007 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:50:46.714303  797007 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 11:50:46.714387  797007 kubeadm.go:602] duration metric: took 47.589272ms to restartPrimaryControlPlane
	I1115 11:50:46.714411  797007 kubeadm.go:403] duration metric: took 251.774665ms to StartCluster
	I1115 11:50:46.714453  797007 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:46.714545  797007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:50:46.715546  797007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:46.715813  797007 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:50:46.716167  797007 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:50:46.716243  797007 addons.go:70] Setting storage-provisioner=true in profile "no-preload-126380"
	I1115 11:50:46.716258  797007 addons.go:239] Setting addon storage-provisioner=true in "no-preload-126380"
	W1115 11:50:46.716263  797007 addons.go:248] addon storage-provisioner should already be in state true
	I1115 11:50:46.716284  797007 host.go:66] Checking if "no-preload-126380" exists ...
	I1115 11:50:46.717180  797007 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:50:46.717333  797007 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:50:46.717501  797007 addons.go:70] Setting dashboard=true in profile "no-preload-126380"
	I1115 11:50:46.717547  797007 addons.go:239] Setting addon dashboard=true in "no-preload-126380"
	W1115 11:50:46.717567  797007 addons.go:248] addon dashboard should already be in state true
	I1115 11:50:46.717602  797007 host.go:66] Checking if "no-preload-126380" exists ...
	I1115 11:50:46.717724  797007 addons.go:70] Setting default-storageclass=true in profile "no-preload-126380"
	I1115 11:50:46.717737  797007 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-126380"
	I1115 11:50:46.718047  797007 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:50:46.718578  797007 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:50:46.720783  797007 out.go:179] * Verifying Kubernetes components...
	I1115 11:50:46.725934  797007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:46.778412  797007 addons.go:239] Setting addon default-storageclass=true in "no-preload-126380"
	W1115 11:50:46.778552  797007 addons.go:248] addon default-storageclass should already be in state true
	I1115 11:50:46.778581  797007 host.go:66] Checking if "no-preload-126380" exists ...
	I1115 11:50:46.779001  797007 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:50:46.780223  797007 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:50:46.783291  797007 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:50:46.783323  797007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:50:46.783395  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:46.803955  797007 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 11:50:46.806954  797007 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 11:50:50.422560  796265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.933378785s)
	I1115 11:50:50.422618  796265 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.770467034s)
	I1115 11:50:50.422652  796265 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:50:50.422706  796265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:50:50.422790  796265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.669146081s)
	I1115 11:50:50.631482  796265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.372980484s)
	I1115 11:50:50.631809  796265 api_server.go:72] duration metric: took 10.595231625s to wait for apiserver process to appear ...
	I1115 11:50:50.631864  796265 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:50:50.631895  796265 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:50:50.634808  796265 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-600818 addons enable metrics-server
	
	I1115 11:50:50.637678  796265 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1115 11:50:50.640550  796265 addons.go:515] duration metric: took 10.603402606s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1115 11:50:50.664565  796265 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 11:50:50.667042  796265 api_server.go:141] control plane version: v1.34.1
	I1115 11:50:50.667065  796265 api_server.go:131] duration metric: took 35.18234ms to wait for apiserver health ...
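	The healthz wait above repeatedly checks https://192.168.76.2:8443/healthz until it returns 200. The sketch below approximates that polling loop; it is not minikube's client (which authenticates against the cluster CA), so the InsecureSkipVerify setting here is a simplification made only for illustration.

	// Illustrative sketch only: poll an apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Simplification for the sketch; the real client trusts the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("apiserver did not become healthy before the deadline")
	}
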
	I1115 11:50:50.667075  796265 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:50:50.670708  796265 system_pods.go:59] 8 kube-system pods found
	I1115 11:50:50.670787  796265 system_pods.go:61] "coredns-66bc5c9577-k2pmf" [6eb5cbde-f6a1-4680-ac07-4a2b6e15d42f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 11:50:50.670814  796265 system_pods.go:61] "etcd-newest-cni-600818" [32466f92-ecfd-446f-bfe9-68cf519b2b89] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:50:50.670857  796265 system_pods.go:61] "kindnet-bcvw7" [75bd6a1d-29ff-4420-982f-97b36c4b5830] Running
	I1115 11:50:50.670883  796265 system_pods.go:61] "kube-apiserver-newest-cni-600818" [443d9983-0c4e-4303-89ec-1a6e18c316ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:50:50.670905  796265 system_pods.go:61] "kube-controller-manager-newest-cni-600818" [b43750ab-bb60-4d03-8054-ddcd38bc1c64] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:50:50.670940  796265 system_pods.go:61] "kube-proxy-kms5c" [2446e186-b744-4098-b190-0a98b30804fd] Running
	I1115 11:50:50.670966  796265 system_pods.go:61] "kube-scheduler-newest-cni-600818" [be75d8e9-f0e3-419b-85a5-702fd1fc2975] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:50:50.670990  796265 system_pods.go:61] "storage-provisioner" [070b587d-9d48-4f2a-9b68-11cc8e004b8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 11:50:50.671025  796265 system_pods.go:74] duration metric: took 3.943934ms to wait for pod list to return data ...
	I1115 11:50:50.671052  796265 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:50:50.690075  796265 default_sa.go:45] found service account: "default"
	I1115 11:50:50.690151  796265 default_sa.go:55] duration metric: took 19.076367ms for default service account to be created ...
	I1115 11:50:50.690178  796265 kubeadm.go:587] duration metric: took 10.653602211s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 11:50:50.690223  796265 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:50:50.748879  796265 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:50:50.748976  796265 node_conditions.go:123] node cpu capacity is 2
	I1115 11:50:50.749003  796265 node_conditions.go:105] duration metric: took 58.758767ms to run NodePressure ...
	I1115 11:50:50.749032  796265 start.go:242] waiting for startup goroutines ...
	I1115 11:50:50.749072  796265 start.go:247] waiting for cluster config update ...
	I1115 11:50:50.749096  796265 start.go:256] writing updated cluster config ...
	I1115 11:50:50.749475  796265 ssh_runner.go:195] Run: rm -f paused
	I1115 11:50:50.863041  796265 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:50:50.868073  796265 out.go:179] * Done! kubectl is now configured to use "newest-cni-600818" cluster and "default" namespace by default
	I1115 11:50:46.809750  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 11:50:46.809776  797007 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 11:50:46.809844  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:46.829294  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:46.832703  797007 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:50:46.832724  797007 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:50:46.832787  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:46.865069  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:46.883210  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:47.243601  797007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:50:47.329722  797007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:50:47.336350  797007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:50:47.341362  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 11:50:47.341434  797007 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 11:50:47.542798  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 11:50:47.542871  797007 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 11:50:47.637197  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 11:50:47.637279  797007 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 11:50:47.788987  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 11:50:47.789061  797007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 11:50:47.893817  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 11:50:47.893891  797007 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 11:50:47.924175  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 11:50:47.924248  797007 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 11:50:47.962364  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 11:50:47.962444  797007 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 11:50:48.011004  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 11:50:48.011093  797007 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 11:50:48.055159  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:50:48.055242  797007 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 11:50:48.109894  797007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	
	
	==> CRI-O <==
	Nov 15 11:50:47 newest-cni-600818 crio[614]: time="2025-11-15T11:50:47.952072419Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:47 newest-cni-600818 crio[614]: time="2025-11-15T11:50:47.954472592Z" level=info msg="Running pod sandbox: kube-system/kindnet-bcvw7/POD" id=a0ca9fba-576d-4c23-906a-64c14cf16599 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:47 newest-cni-600818 crio[614]: time="2025-11-15T11:50:47.954527181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:47 newest-cni-600818 crio[614]: time="2025-11-15T11:50:47.963586057Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e08335e8-6b49-436f-a017-251f3bdf3bc3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:47 newest-cni-600818 crio[614]: time="2025-11-15T11:50:47.977129402Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a0ca9fba-576d-4c23-906a-64c14cf16599 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.01489915Z" level=info msg="Ran pod sandbox 84396a8c660d9c26cc79bf0c0da2577843dd393e518afc793d94252238d46d43 with infra container: kube-system/kindnet-bcvw7/POD" id=a0ca9fba-576d-4c23-906a-64c14cf16599 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.023563503Z" level=info msg="Ran pod sandbox b1d073351984372a7dbc5f0709fcb167a8a76e2776bdf6b35b593768999ae290 with infra container: kube-system/kube-proxy-kms5c/POD" id=e08335e8-6b49-436f-a017-251f3bdf3bc3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.034796507Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b58525da-1cf5-461b-bdd3-00d247c26945 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.048393465Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4fe819a9-e4bf-4ab3-970b-807bbfa030a2 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.066010486Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3249c102-4e90-4da8-b5f2-a45d40a61092 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.066796758Z" level=info msg="Creating container: kube-system/kindnet-bcvw7/kindnet-cni" id=19612d60-5c18-40ad-b379-17016619604a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.06698826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.082231594Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=60ed5b1c-9657-41e5-9276-b79b96e37b97 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.099846933Z" level=info msg="Creating container: kube-system/kube-proxy-kms5c/kube-proxy" id=f0db0400-e147-4538-9187-9b694b764568 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.100138751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.116325095Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.125942418Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.127633192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.128222647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.230001029Z" level=info msg="Created container 508191357d8c46202caf105d2b19322ed0fce00bbd8bb676251d37ea88caa5fb: kube-system/kindnet-bcvw7/kindnet-cni" id=19612d60-5c18-40ad-b379-17016619604a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.233110912Z" level=info msg="Starting container: 508191357d8c46202caf105d2b19322ed0fce00bbd8bb676251d37ea88caa5fb" id=30f29745-2f2e-41bc-a66d-7aa039dd7809 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.240610921Z" level=info msg="Created container 6b6b51789b97b7a45064b14b4d84c0c009313bbad64adbc4381219fd21228755: kube-system/kube-proxy-kms5c/kube-proxy" id=f0db0400-e147-4538-9187-9b694b764568 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.241569774Z" level=info msg="Starting container: 6b6b51789b97b7a45064b14b4d84c0c009313bbad64adbc4381219fd21228755" id=93603d69-9639-4aac-a851-29f52b1608a3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.246703571Z" level=info msg="Started container" PID=1056 containerID=508191357d8c46202caf105d2b19322ed0fce00bbd8bb676251d37ea88caa5fb description=kube-system/kindnet-bcvw7/kindnet-cni id=30f29745-2f2e-41bc-a66d-7aa039dd7809 name=/runtime.v1.RuntimeService/StartContainer sandboxID=84396a8c660d9c26cc79bf0c0da2577843dd393e518afc793d94252238d46d43
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.261140554Z" level=info msg="Started container" PID=1054 containerID=6b6b51789b97b7a45064b14b4d84c0c009313bbad64adbc4381219fd21228755 description=kube-system/kube-proxy-kms5c/kube-proxy id=93603d69-9639-4aac-a851-29f52b1608a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1d073351984372a7dbc5f0709fcb167a8a76e2776bdf6b35b593768999ae290
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	508191357d8c4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   84396a8c660d9       kindnet-bcvw7                               kube-system
	6b6b51789b97b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   8 seconds ago       Running             kube-proxy                1                   b1d0733519843       kube-proxy-kms5c                            kube-system
	645246f782520       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            1                   d63ba10b69c5c       kube-scheduler-newest-cni-600818            kube-system
	fd7399c25f9e0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   1                   1fd1e55f31e4b       kube-controller-manager-newest-cni-600818   kube-system
	11203d5b5b356       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            1                   e3aad0da4358e       kube-apiserver-newest-cni-600818            kube-system
	80865ff5e22d4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      1                   39e6287d4fcdd       etcd-newest-cni-600818                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-600818
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-600818
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=newest-cni-600818
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_50_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:50:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-600818
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:50:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:50:47 +0000   Sat, 15 Nov 2025 11:50:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:50:47 +0000   Sat, 15 Nov 2025 11:50:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:50:47 +0000   Sat, 15 Nov 2025 11:50:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 15 Nov 2025 11:50:47 +0000   Sat, 15 Nov 2025 11:50:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-600818
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                c022f560-be97-45fe-81fb-2d2f59506bb6
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-600818                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-bcvw7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-600818             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-600818    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-kms5c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-600818             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Warning  CgroupV1                 42s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node newest-cni-600818 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node newest-cni-600818 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s (x8 over 42s)  kubelet          Node newest-cni-600818 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node newest-cni-600818 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node newest-cni-600818 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s                kubelet          Node newest-cni-600818 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           30s                node-controller  Node newest-cni-600818 event: Registered Node newest-cni-600818 in Controller
	  Normal   Starting                 17s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17s (x8 over 17s)  kubelet          Node newest-cni-600818 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s (x8 over 17s)  kubelet          Node newest-cni-600818 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s (x8 over 17s)  kubelet          Node newest-cni-600818 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-600818 event: Registered Node newest-cni-600818 in Controller
	
	
	==> dmesg <==
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	[Nov15 11:44] overlayfs: idmapped layers are currently not supported
	[Nov15 11:46] overlayfs: idmapped layers are currently not supported
	[Nov15 11:47] overlayfs: idmapped layers are currently not supported
	[ +42.475391] overlayfs: idmapped layers are currently not supported
	[Nov15 11:48] overlayfs: idmapped layers are currently not supported
	[Nov15 11:49] overlayfs: idmapped layers are currently not supported
	[Nov15 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.578289] overlayfs: idmapped layers are currently not supported
	[  +6.063974] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [80865ff5e22d408a46025735f288fbc8807cecdd6680ae8eadc50da5c41cd3e6] <==
	{"level":"warn","ts":"2025-11-15T11:50:44.034245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.073119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.117646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.172176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.210713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.247134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.287271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.332589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.352971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.402305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.435767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.473232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.509341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.542108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.591987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.605095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.631498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.654660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.682181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.704278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.736429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.758606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.792649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.806435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:45.002610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33506","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:50:56 up  3:33,  0 user,  load average: 5.01, 3.68, 3.03
	Linux newest-cni-600818 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [508191357d8c46202caf105d2b19322ed0fce00bbd8bb676251d37ea88caa5fb] <==
	I1115 11:50:48.399334       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:50:48.425090       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 11:50:48.425226       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:50:48.425245       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:50:48.425266       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:50:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:50:48.611609       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:50:48.611628       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:50:48.611637       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:50:48.611914       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [11203d5b5b35660740cae26a3b2082fe96faeca680bc0c57a5eb2ba26511cba1] <==
	I1115 11:50:47.348248       1 cache.go:39] Caches are synced for autoregister controller
	I1115 11:50:47.443402       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:50:47.449030       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 11:50:47.449063       1 policy_source.go:240] refreshing policies
	I1115 11:50:47.449250       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:50:47.449329       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:50:47.450891       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 11:50:47.451633       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 11:50:47.451670       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 11:50:47.451677       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 11:50:47.477007       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 11:50:47.477101       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 11:50:47.497884       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1115 11:50:47.499902       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 11:50:47.560147       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 11:50:49.767019       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 11:50:50.048731       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 11:50:50.227403       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:50:50.291756       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:50:50.587093       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.185.188"}
	I1115 11:50:50.621325       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.232.154"}
	I1115 11:50:52.069272       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 11:50:52.200918       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 11:50:52.339336       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 11:50:52.423032       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [fd7399c25f9e0b5ec2bd454e0007c03228ee3b5f4d4bf00dc22c645038b07897] <==
	I1115 11:50:51.893299       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 11:50:51.942769       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"newest-cni-600818\" does not exist"
	I1115 11:50:51.976902       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 11:50:51.960672       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 11:50:51.960695       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 11:50:51.964277       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 11:50:51.986089       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 11:50:51.967213       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 11:50:51.986489       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 11:50:51.986508       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 11:50:51.986528       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 11:50:51.995509       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 11:50:52.005761       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 11:50:52.005876       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 11:50:52.012982       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 11:50:52.013101       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-600818"
	I1115 11:50:52.013179       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 11:50:52.005892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 11:50:52.014791       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:50:52.026269       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:50:52.026353       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:50:52.026389       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:50:52.026468       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 11:50:52.033788       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 11:50:52.034346       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [6b6b51789b97b7a45064b14b4d84c0c009313bbad64adbc4381219fd21228755] <==
	I1115 11:50:49.657640       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:50:50.014396       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:50:50.614521       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:50:50.647108       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 11:50:50.647332       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:50:51.220473       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:50:51.220590       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:50:51.231917       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:50:51.232298       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:50:51.232363       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:50:51.233945       1 config.go:200] "Starting service config controller"
	I1115 11:50:51.240906       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:50:51.240986       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:50:51.241022       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:50:51.241066       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:50:51.241106       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:50:51.241872       1 config.go:309] "Starting node config controller"
	I1115 11:50:51.241943       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:50:51.241975       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:50:51.341785       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:50:51.341890       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 11:50:51.341960       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [645246f7825202338380cb5d10ceb9da92cdfc53e1f942510d2442a0fd84a097] <==
	I1115 11:50:43.758708       1 serving.go:386] Generated self-signed cert in-memory
	I1115 11:50:50.231487       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 11:50:50.232306       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:50:50.260810       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 11:50:50.260928       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 11:50:50.261007       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:50:50.261043       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:50:50.261086       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:50:50.261129       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:50:50.262558       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 11:50:50.262667       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 11:50:50.365083       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 11:50:50.365314       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:50:50.366091       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 15 11:50:42 newest-cni-600818 kubelet[729]: E1115 11:50:42.961235     729 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-600818\" not found" node="newest-cni-600818"
	Nov 15 11:50:46 newest-cni-600818 kubelet[729]: I1115 11:50:46.419289     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.311349     729 apiserver.go:52] "Watching apiserver"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.417829     729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.506433     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/75bd6a1d-29ff-4420-982f-97b36c4b5830-cni-cfg\") pod \"kindnet-bcvw7\" (UID: \"75bd6a1d-29ff-4420-982f-97b36c4b5830\") " pod="kube-system/kindnet-bcvw7"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.506482     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75bd6a1d-29ff-4420-982f-97b36c4b5830-xtables-lock\") pod \"kindnet-bcvw7\" (UID: \"75bd6a1d-29ff-4420-982f-97b36c4b5830\") " pod="kube-system/kindnet-bcvw7"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.506505     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2446e186-b744-4098-b190-0a98b30804fd-xtables-lock\") pod \"kube-proxy-kms5c\" (UID: \"2446e186-b744-4098-b190-0a98b30804fd\") " pod="kube-system/kube-proxy-kms5c"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.506523     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75bd6a1d-29ff-4420-982f-97b36c4b5830-lib-modules\") pod \"kindnet-bcvw7\" (UID: \"75bd6a1d-29ff-4420-982f-97b36c4b5830\") " pod="kube-system/kindnet-bcvw7"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.506566     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2446e186-b744-4098-b190-0a98b30804fd-lib-modules\") pod \"kube-proxy-kms5c\" (UID: \"2446e186-b744-4098-b190-0a98b30804fd\") " pod="kube-system/kube-proxy-kms5c"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: E1115 11:50:47.638365     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-600818\" already exists" pod="kube-system/kube-scheduler-newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.638413     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.685044     729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: E1115 11:50:47.735644     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-600818\" already exists" pod="kube-system/etcd-newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.735681     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.797284     729 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.797389     729 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.797421     729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.798655     729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: E1115 11:50:47.814964     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-600818\" already exists" pod="kube-system/kube-apiserver-newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.815008     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: E1115 11:50:47.891738     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-600818\" already exists" pod="kube-system/kube-controller-manager-newest-cni-600818"
	Nov 15 11:50:48 newest-cni-600818 kubelet[729]: W1115 11:50:48.009456     729 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/crio-84396a8c660d9c26cc79bf0c0da2577843dd393e518afc793d94252238d46d43 WatchSource:0}: Error finding container 84396a8c660d9c26cc79bf0c0da2577843dd393e518afc793d94252238d46d43: Status 404 returned error can't find the container with id 84396a8c660d9c26cc79bf0c0da2577843dd393e518afc793d94252238d46d43
	Nov 15 11:50:52 newest-cni-600818 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 11:50:52 newest-cni-600818 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 11:50:52 newest-cni-600818 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-600818 -n newest-cni-600818
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-600818 -n newest-cni-600818: exit status 2 (527.623558ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
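Note on the status checks above: `minikube status --format` takes a Go template over the status struct, so a single field such as `{{.APIServer}}` or `{{.Host}}` can be printed on its own. As a minimal sketch outside the test run (profile name reused from this report, and assuming the additional `Kubelet` field the command also exposes), several fields can be queried at once:

	out/minikube-linux-arm64 status -p newest-cni-600818 --format='{{.Host}} {{.APIServer}} {{.Kubelet}}'

A non-zero exit here just reflects that not every component is in the Running state, which is why the helper records "status error: exit status 2 (may be ok)" rather than failing outright.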
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-600818 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-k2pmf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nlkr7 kubernetes-dashboard-855c9754f9-rgp9d
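The list of non-running pods above comes from a field selector on pod phase combined with a JSONPath template for the names. A standalone sketch of the same query, using the context name from this report:

	kubectl --context newest-cni-600818 get pods -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'

`-A` spans all namespaces, and `status.phase!=Running` keeps only pods that are Pending, Succeeded, Failed, or Unknown.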
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-600818 describe pod coredns-66bc5c9577-k2pmf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nlkr7 kubernetes-dashboard-855c9754f9-rgp9d
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-600818 describe pod coredns-66bc5c9577-k2pmf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nlkr7 kubernetes-dashboard-855c9754f9-rgp9d: exit status 1 (106.226886ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-k2pmf" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-nlkr7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-rgp9d" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-600818 describe pod coredns-66bc5c9577-k2pmf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nlkr7 kubernetes-dashboard-855c9754f9-rgp9d: exit status 1
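The NotFound errors above are most likely a namespace issue rather than missing pods: the describe call passes no `-n` flag, so kubectl looks in the default namespace, while coredns and storage-provisioner live in kube-system and the dashboard pods in kubernetes-dashboard. A namespace-aware sketch (pod names copied from the list above):

	kubectl --context newest-cni-600818 -n kube-system describe pod coredns-66bc5c9577-k2pmf storage-provisioner
	kubectl --context newest-cni-600818 -n kubernetes-dashboard describe pod kubernetes-dashboard-855c9754f9-rgp9d dashboard-metrics-scraper-6ffb444bf9-nlkr7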
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-600818
helpers_test.go:243: (dbg) docker inspect newest-cni-600818:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b",
	        "Created": "2025-11-15T11:49:52.920740445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 796392,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:50:32.258601781Z",
	            "FinishedAt": "2025-11-15T11:50:31.292684994Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/hostname",
	        "HostsPath": "/var/lib/docker/containers/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/hosts",
	        "LogPath": "/var/lib/docker/containers/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b-json.log",
	        "Name": "/newest-cni-600818",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-600818:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-600818",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b",
	                "LowerDir": "/var/lib/docker/overlay2/6b840733d5eb5568a1b1a5e0e7404ea4d320669e261fcc419b6f5be4f5457db2-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6b840733d5eb5568a1b1a5e0e7404ea4d320669e261fcc419b6f5be4f5457db2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6b840733d5eb5568a1b1a5e0e7404ea4d320669e261fcc419b6f5be4f5457db2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6b840733d5eb5568a1b1a5e0e7404ea4d320669e261fcc419b6f5be4f5457db2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-600818",
	                "Source": "/var/lib/docker/volumes/newest-cni-600818/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-600818",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-600818",
	                "name.minikube.sigs.k8s.io": "newest-cni-600818",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5cc2ec08f86bf2f282d024d18d649b5e7a87fcb4d4de59d3caa43fc58264dc12",
	            "SandboxKey": "/var/run/docker/netns/5cc2ec08f86b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-600818": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:ab:b7:09:c2:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a3cd7ce9096f133c92aef6a7dc4fc2b918e8e85d34f96edb6bcf65eb55bcdc15",
	                    "EndpointID": "f2ae6ddbde5643d38444eca155d6e20a5ea74e63d2bc3bcf0a5f29a4cb17101f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-600818",
	                        "533b7ee97cf4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
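When only a few fields from the inspect dump matter, `docker inspect --format` with a Go template narrows the output. A minimal sketch against the container above (the template syntax is standard docker CLI, not part of the test suite):

	docker inspect newest-cni-600818 --format '{{.State.Status}} pid={{.State.Pid}} ip={{(index .NetworkSettings.Networks "newest-cni-600818").IPAddress}}'

Given the JSON above, this would print "running pid=796392 ip=192.168.76.2".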
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-600818 -n newest-cni-600818
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-600818 -n newest-cni-600818: exit status 2 (400.706587ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-600818 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-600818 logs -n 25: (1.223089975s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-404149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ stop    │ -p embed-certs-404149 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ addons  │ enable dashboard -p embed-certs-404149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ start   │ -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:49 UTC │
	│ image   │ default-k8s-diff-port-769461 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │ 15 Nov 25 11:48 UTC │
	│ pause   │ -p default-k8s-diff-port-769461 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p disable-driver-mounts-200933                                                                                                                                                                                                               │ disable-driver-mounts-200933 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ start   │ -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:50 UTC │
	│ image   │ embed-certs-404149 image list --format=json                                                                                                                                                                                                   │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ pause   │ -p embed-certs-404149 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │                     │
	│ delete  │ -p embed-certs-404149                                                                                                                                                                                                                         │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p embed-certs-404149                                                                                                                                                                                                                         │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ start   │ -p newest-cni-600818 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable metrics-server -p no-preload-126380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	│ stop    │ -p no-preload-126380 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable metrics-server -p newest-cni-600818 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	│ stop    │ -p newest-cni-600818 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable dashboard -p newest-cni-600818 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ start   │ -p newest-cni-600818 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable dashboard -p no-preload-126380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ start   │ -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	│ image   │ newest-cni-600818 image list --format=json                                                                                                                                                                                                    │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ pause   │ -p newest-cni-600818 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:50:36
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:50:36.267151  797007 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:50:36.267356  797007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:50:36.267382  797007 out.go:374] Setting ErrFile to fd 2...
	I1115 11:50:36.267401  797007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:50:36.267666  797007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:50:36.268107  797007 out.go:368] Setting JSON to false
	I1115 11:50:36.269112  797007 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12787,"bootTime":1763194649,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:50:36.269207  797007 start.go:143] virtualization:  
	I1115 11:50:36.273947  797007 out.go:179] * [no-preload-126380] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:50:36.277205  797007 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:50:36.277284  797007 notify.go:221] Checking for updates...
	I1115 11:50:36.283904  797007 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:50:36.286838  797007 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:50:36.290388  797007 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:50:36.293313  797007 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:50:36.296164  797007 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:50:36.299607  797007 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:50:36.300151  797007 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:50:36.331381  797007 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:50:36.331493  797007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:50:36.418354  797007 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-15 11:50:36.40556829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:50:36.418462  797007 docker.go:319] overlay module found
	I1115 11:50:36.421634  797007 out.go:179] * Using the docker driver based on existing profile
	I1115 11:50:36.424608  797007 start.go:309] selected driver: docker
	I1115 11:50:36.424631  797007 start.go:930] validating driver "docker" against &{Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:50:36.424744  797007 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:50:36.425528  797007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:50:36.488567  797007 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-15 11:50:36.473029239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:50:36.488997  797007 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:50:36.489028  797007 cni.go:84] Creating CNI manager for ""
	I1115 11:50:36.489083  797007 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:50:36.489120  797007 start.go:353] cluster config:
	{Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:50:36.492434  797007 out.go:179] * Starting "no-preload-126380" primary control-plane node in "no-preload-126380" cluster
	I1115 11:50:36.495297  797007 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:50:36.498262  797007 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:50:36.501207  797007 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:50:36.501361  797007 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/config.json ...
	I1115 11:50:36.501678  797007 cache.go:107] acquiring lock: {Name:mk91726f44286832b0046d8499f5d58ff7ad2b6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.501754  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1115 11:50:36.501767  797007 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.879µs
	I1115 11:50:36.501775  797007 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1115 11:50:36.501787  797007 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:50:36.501877  797007 cache.go:107] acquiring lock: {Name:mkb69d6ceae6b9540e167400909c918adeec9369 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.501919  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1115 11:50:36.501925  797007 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 57.773µs
	I1115 11:50:36.501931  797007 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1115 11:50:36.501942  797007 cache.go:107] acquiring lock: {Name:mk100238a706e702239a000cdfd80c281f376431 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.501969  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1115 11:50:36.501974  797007 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 33.871µs
	I1115 11:50:36.501980  797007 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1115 11:50:36.501989  797007 cache.go:107] acquiring lock: {Name:mk15eeacf94b66be4392721a733df868bc784101 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.502015  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1115 11:50:36.502020  797007 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 31.385µs
	I1115 11:50:36.502025  797007 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1115 11:50:36.502038  797007 cache.go:107] acquiring lock: {Name:mkb04d459fbb71ba8df962665fc7ab481f00418b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.502063  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1115 11:50:36.502069  797007 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 35.873µs
	I1115 11:50:36.502079  797007 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1115 11:50:36.502094  797007 cache.go:107] acquiring lock: {Name:mk87d816e36c32f87fd55930f6a9d59e6dfc4a0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.502120  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1115 11:50:36.502125  797007 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 36.677µs
	I1115 11:50:36.502130  797007 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1115 11:50:36.502139  797007 cache.go:107] acquiring lock: {Name:mk10696b84637583e56394b885fa921b6d221577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.502165  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1115 11:50:36.502169  797007 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.066µs
	I1115 11:50:36.502175  797007 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1115 11:50:36.502184  797007 cache.go:107] acquiring lock: {Name:mkd034e18ce491e5f4eb3166d5f81cee9da0de03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.502209  797007 cache.go:115] /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1115 11:50:36.502214  797007 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 31.688µs
	I1115 11:50:36.502220  797007 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1115 11:50:36.502226  797007 cache.go:87] Successfully saved all images to host disk.
	I1115 11:50:36.521564  797007 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:50:36.521588  797007 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:50:36.521601  797007 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:50:36.521624  797007 start.go:360] acquireMachinesLock for no-preload-126380: {Name:mk5469ab80c2d37eee16becc95c7569af1cc4687 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:50:36.521680  797007 start.go:364] duration metric: took 35.594µs to acquireMachinesLock for "no-preload-126380"
	I1115 11:50:36.521704  797007 start.go:96] Skipping create...Using existing machine configuration
	I1115 11:50:36.521713  797007 fix.go:54] fixHost starting: 
	I1115 11:50:36.521972  797007 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:50:36.543870  797007 fix.go:112] recreateIfNeeded on no-preload-126380: state=Stopped err=<nil>
	W1115 11:50:36.543904  797007 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 11:50:32.227533  796265 out.go:252] * Restarting existing docker container for "newest-cni-600818" ...
	I1115 11:50:32.227637  796265 cli_runner.go:164] Run: docker start newest-cni-600818
	I1115 11:50:32.456336  796265 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:50:32.478967  796265 kic.go:430] container "newest-cni-600818" state is running.
	I1115 11:50:32.479406  796265 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-600818
	I1115 11:50:32.500540  796265 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/config.json ...
	I1115 11:50:32.500867  796265 machine.go:94] provisionDockerMachine start ...
	I1115 11:50:32.501196  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:32.525526  796265 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:32.526315  796265 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 11:50:32.526334  796265 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:50:32.527577  796265 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:50:35.692476  796265 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-600818
	
	I1115 11:50:35.692503  796265 ubuntu.go:182] provisioning hostname "newest-cni-600818"
	I1115 11:50:35.692564  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:35.714592  796265 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:35.715053  796265 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 11:50:35.715070  796265 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-600818 && echo "newest-cni-600818" | sudo tee /etc/hostname
	I1115 11:50:35.918636  796265 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-600818
	
	I1115 11:50:35.918712  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:35.951068  796265 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:35.951372  796265 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 11:50:35.951388  796265 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-600818' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-600818/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-600818' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:50:36.120881  796265 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:50:36.120922  796265 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:50:36.120948  796265 ubuntu.go:190] setting up certificates
	I1115 11:50:36.120958  796265 provision.go:84] configureAuth start
	I1115 11:50:36.121024  796265 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-600818
	I1115 11:50:36.148474  796265 provision.go:143] copyHostCerts
	I1115 11:50:36.148551  796265 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:50:36.148580  796265 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:50:36.148662  796265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:50:36.148815  796265 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:50:36.148827  796265 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:50:36.148884  796265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:50:36.149009  796265 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:50:36.149021  796265 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:50:36.149055  796265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:50:36.149123  796265 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.newest-cni-600818 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-600818]
	I1115 11:50:36.346553  796265 provision.go:177] copyRemoteCerts
	I1115 11:50:36.346645  796265 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:50:36.346720  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:36.376292  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:36.482427  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:50:36.501810  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 11:50:36.530889  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:50:36.554280  796265 provision.go:87] duration metric: took 433.302933ms to configureAuth
	I1115 11:50:36.554308  796265 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:50:36.554516  796265 config.go:182] Loaded profile config "newest-cni-600818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:50:36.554633  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:36.589108  796265 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:36.589527  796265 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 11:50:36.589548  796265 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:50:36.968543  796265 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:50:36.968568  796265 machine.go:97] duration metric: took 4.467688814s to provisionDockerMachine
	I1115 11:50:36.968579  796265 start.go:293] postStartSetup for "newest-cni-600818" (driver="docker")
	I1115 11:50:36.968590  796265 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:50:36.968664  796265 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:50:36.968717  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:36.989657  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:37.108094  796265 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:50:37.113147  796265 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:50:37.113178  796265 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:50:37.113189  796265 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:50:37.113242  796265 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:50:37.113343  796265 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:50:37.113453  796265 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:50:37.124166  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:50:37.150767  796265 start.go:296] duration metric: took 182.172188ms for postStartSetup
	I1115 11:50:37.150861  796265 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:50:37.150921  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:37.174384  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:37.286131  796265 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:50:37.292738  796265 fix.go:56] duration metric: took 5.085135741s for fixHost
	I1115 11:50:37.292761  796265 start.go:83] releasing machines lock for "newest-cni-600818", held for 5.085181961s
	I1115 11:50:37.292827  796265 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-600818
	I1115 11:50:37.312052  796265 ssh_runner.go:195] Run: cat /version.json
	I1115 11:50:37.312117  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:37.313162  796265 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:50:37.313298  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:37.363879  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:37.366935  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:37.492818  796265 ssh_runner.go:195] Run: systemctl --version
	I1115 11:50:37.593286  796265 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:50:37.630016  796265 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:50:37.634352  796265 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:50:37.634428  796265 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:50:37.642421  796265 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:50:37.642443  796265 start.go:496] detecting cgroup driver to use...
	I1115 11:50:37.642474  796265 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:50:37.642522  796265 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:50:37.658308  796265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:50:37.671639  796265 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:50:37.671721  796265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:50:37.687635  796265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:50:37.701317  796265 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:50:37.819670  796265 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:50:37.948704  796265 docker.go:234] disabling docker service ...
	I1115 11:50:37.948816  796265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:50:37.964701  796265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:50:37.978029  796265 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:50:38.108094  796265 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:50:38.234612  796265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:50:38.248255  796265 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:50:38.262267  796265 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:50:38.262359  796265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:38.271086  796265 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:50:38.271180  796265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:38.280096  796265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:38.289429  796265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:38.298948  796265 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:50:38.307687  796265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:38.317892  796265 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:38.333363  796265 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:38.347110  796265 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:50:38.360375  796265 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:50:38.378297  796265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:38.561962  796265 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:50:38.696660  796265 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:50:38.696777  796265 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:50:38.700800  796265 start.go:564] Will wait 60s for crictl version
	I1115 11:50:38.700886  796265 ssh_runner.go:195] Run: which crictl
	I1115 11:50:38.704931  796265 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:50:38.732293  796265 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:50:38.732433  796265 ssh_runner.go:195] Run: crio --version
	I1115 11:50:38.760443  796265 ssh_runner.go:195] Run: crio --version
	I1115 11:50:38.794705  796265 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:50:38.797503  796265 cli_runner.go:164] Run: docker network inspect newest-cni-600818 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:50:38.818619  796265 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 11:50:38.822531  796265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:50:38.835080  796265 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 11:50:38.837852  796265 kubeadm.go:884] updating cluster {Name:newest-cni-600818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:50:38.838007  796265 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:50:38.838090  796265 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:50:38.870465  796265 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:50:38.870489  796265 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:50:38.870550  796265 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:50:38.895131  796265 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:50:38.895151  796265 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:50:38.895159  796265 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 11:50:38.895254  796265 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-600818 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:50:38.895336  796265 ssh_runner.go:195] Run: crio config
	I1115 11:50:38.950969  796265 cni.go:84] Creating CNI manager for ""
	I1115 11:50:38.950995  796265 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:50:38.951014  796265 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 11:50:38.951082  796265 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-600818 NodeName:newest-cni-600818 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:50:38.951247  796265 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-600818"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 11:50:38.951339  796265 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:50:38.959018  796265 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:50:38.959139  796265 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:50:38.966640  796265 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 11:50:38.979146  796265 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:50:38.991649  796265 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1115 11:50:39.007656  796265 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:50:39.011938  796265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:50:39.022395  796265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:39.151080  796265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:50:39.166685  796265 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818 for IP: 192.168.76.2
	I1115 11:50:39.166703  796265 certs.go:195] generating shared ca certs ...
	I1115 11:50:39.166719  796265 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:39.166855  796265 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:50:39.166894  796265 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:50:39.166901  796265 certs.go:257] generating profile certs ...
	I1115 11:50:39.166988  796265 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/client.key
	I1115 11:50:39.167055  796265 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key.a60e7b42
	I1115 11:50:39.167202  796265 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.key
	I1115 11:50:39.167355  796265 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:50:39.167424  796265 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:50:39.167448  796265 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:50:39.167514  796265 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:50:39.167570  796265 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:50:39.167625  796265 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:50:39.167697  796265 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:50:39.168361  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:50:39.187624  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:50:39.205678  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:50:39.223440  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:50:39.241495  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 11:50:39.260942  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 11:50:39.279014  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:50:39.297412  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/newest-cni-600818/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 11:50:39.315462  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:50:39.336542  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:50:39.357969  796265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:50:39.378742  796265 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:50:39.406212  796265 ssh_runner.go:195] Run: openssl version
	I1115 11:50:39.413922  796265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:50:39.423263  796265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:50:39.427289  796265 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:50:39.427373  796265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:50:39.468426  796265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:50:39.476478  796265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:50:39.484515  796265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:50:39.488344  796265 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:50:39.488411  796265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:50:39.532050  796265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:50:39.540064  796265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:50:39.548449  796265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:39.552191  796265 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:39.552262  796265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:39.593210  796265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:50:39.601498  796265 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:50:39.605549  796265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:50:39.646868  796265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:50:39.688052  796265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:50:39.729509  796265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:50:39.771171  796265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:50:39.812662  796265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 11:50:39.861313  796265 kubeadm.go:401] StartCluster: {Name:newest-cni-600818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-600818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:50:39.861467  796265 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:50:39.861566  796265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:50:39.948906  796265 cri.go:89] found id: ""
	I1115 11:50:39.949004  796265 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:50:39.965393  796265 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:50:39.965471  796265 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:50:39.965551  796265 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:50:39.991818  796265 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:50:39.992285  796265 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-600818" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:50:39.992520  796265 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-584713/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-600818" cluster setting kubeconfig missing "newest-cni-600818" context setting]
	I1115 11:50:39.992893  796265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:39.994475  796265 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:50:40.035141  796265 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1115 11:50:40.035179  796265 kubeadm.go:602] duration metric: took 69.694251ms to restartPrimaryControlPlane
	I1115 11:50:40.035222  796265 kubeadm.go:403] duration metric: took 173.93672ms to StartCluster
	I1115 11:50:40.035240  796265 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:40.035335  796265 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:50:40.036203  796265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:40.036527  796265 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:50:40.037140  796265 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:50:40.037236  796265 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-600818"
	I1115 11:50:40.037273  796265 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-600818"
	W1115 11:50:40.037284  796265 addons.go:248] addon storage-provisioner should already be in state true
	I1115 11:50:40.037312  796265 host.go:66] Checking if "newest-cni-600818" exists ...
	I1115 11:50:40.038125  796265 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:50:40.038510  796265 config.go:182] Loaded profile config "newest-cni-600818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:50:40.038742  796265 addons.go:70] Setting dashboard=true in profile "newest-cni-600818"
	I1115 11:50:40.038765  796265 addons.go:239] Setting addon dashboard=true in "newest-cni-600818"
	W1115 11:50:40.038772  796265 addons.go:248] addon dashboard should already be in state true
	I1115 11:50:40.038809  796265 host.go:66] Checking if "newest-cni-600818" exists ...
	I1115 11:50:40.039246  796265 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:50:40.040315  796265 addons.go:70] Setting default-storageclass=true in profile "newest-cni-600818"
	I1115 11:50:40.040396  796265 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-600818"
	I1115 11:50:40.040795  796265 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:50:40.044005  796265 out.go:179] * Verifying Kubernetes components...
	I1115 11:50:40.056238  796265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:40.119578  796265 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:50:40.119675  796265 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 11:50:40.123856  796265 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:50:40.123886  796265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:50:40.123965  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:40.128698  796265 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 11:50:36.547322  797007 out.go:252] * Restarting existing docker container for "no-preload-126380" ...
	I1115 11:50:36.547402  797007 cli_runner.go:164] Run: docker start no-preload-126380
	I1115 11:50:36.829683  797007 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:50:36.857025  797007 kic.go:430] container "no-preload-126380" state is running.
	I1115 11:50:36.857412  797007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-126380
	I1115 11:50:36.889091  797007 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/config.json ...
	I1115 11:50:36.889332  797007 machine.go:94] provisionDockerMachine start ...
	I1115 11:50:36.889400  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:36.915214  797007 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:36.915529  797007 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I1115 11:50:36.915544  797007 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:50:36.917498  797007 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 11:50:40.119377  797007 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-126380
	
	I1115 11:50:40.119400  797007 ubuntu.go:182] provisioning hostname "no-preload-126380"
	I1115 11:50:40.119470  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:40.195305  797007 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:40.195625  797007 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I1115 11:50:40.195637  797007 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-126380 && echo "no-preload-126380" | sudo tee /etc/hostname
	I1115 11:50:40.426167  797007 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-126380
	
	I1115 11:50:40.426319  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:40.458743  797007 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:40.459049  797007 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I1115 11:50:40.459066  797007 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-126380' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-126380/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-126380' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:50:40.649776  797007 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:50:40.649821  797007 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:50:40.649852  797007 ubuntu.go:190] setting up certificates
	I1115 11:50:40.649861  797007 provision.go:84] configureAuth start
	I1115 11:50:40.649928  797007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-126380
	I1115 11:50:40.675359  797007 provision.go:143] copyHostCerts
	I1115 11:50:40.675427  797007 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:50:40.675445  797007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:50:40.675528  797007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:50:40.675627  797007 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:50:40.675638  797007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:50:40.675665  797007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:50:40.675721  797007 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:50:40.675730  797007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:50:40.675754  797007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:50:40.675803  797007 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.no-preload-126380 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-126380]
	I1115 11:50:41.086185  797007 provision.go:177] copyRemoteCerts
	I1115 11:50:41.086276  797007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:50:41.086326  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:41.106402  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:41.223102  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:50:41.258109  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 11:50:40.133004  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 11:50:40.133033  796265 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 11:50:40.133113  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:40.135326  796265 addons.go:239] Setting addon default-storageclass=true in "newest-cni-600818"
	W1115 11:50:40.135344  796265 addons.go:248] addon default-storageclass should already be in state true
	I1115 11:50:40.135369  796265 host.go:66] Checking if "newest-cni-600818" exists ...
	I1115 11:50:40.135785  796265 cli_runner.go:164] Run: docker container inspect newest-cni-600818 --format={{.State.Status}}
	I1115 11:50:40.226109  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:40.230998  796265 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:50:40.231021  796265 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:50:40.231084  796265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-600818
	I1115 11:50:40.233268  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:40.269744  796265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/newest-cni-600818/id_rsa Username:docker}
	I1115 11:50:40.489148  796265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:50:40.622502  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 11:50:40.622526  796265 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 11:50:40.652126  796265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:50:40.753619  796265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:50:40.769386  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 11:50:40.769423  796265 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 11:50:40.934252  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 11:50:40.934275  796265 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 11:50:41.038077  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 11:50:41.038101  796265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 11:50:41.113283  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 11:50:41.113312  796265 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 11:50:41.145173  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 11:50:41.145198  796265 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 11:50:41.181056  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 11:50:41.181080  796265 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 11:50:41.205992  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 11:50:41.206017  796265 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 11:50:41.231640  796265 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:50:41.231677  796265 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 11:50:41.258461  796265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:50:41.287584  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 11:50:41.330322  797007 provision.go:87] duration metric: took 680.441576ms to configureAuth
	I1115 11:50:41.330347  797007 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:50:41.330538  797007 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:50:41.330643  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:41.382324  797007 main.go:143] libmachine: Using SSH client type: native
	I1115 11:50:41.382636  797007 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I1115 11:50:41.382650  797007 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:50:41.849362  797007 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:50:41.849459  797007 machine.go:97] duration metric: took 4.96010913s to provisionDockerMachine
	I1115 11:50:41.849491  797007 start.go:293] postStartSetup for "no-preload-126380" (driver="docker")
	I1115 11:50:41.849534  797007 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:50:41.849621  797007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:50:41.849698  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:41.894448  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:42.017409  797007 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:50:42.022660  797007 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:50:42.022688  797007 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:50:42.022708  797007 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:50:42.022768  797007 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:50:42.022846  797007 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:50:42.022948  797007 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:50:42.034335  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:50:42.074669  797007 start.go:296] duration metric: took 225.12941ms for postStartSetup
	I1115 11:50:42.074800  797007 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:50:42.074934  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:42.116026  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:42.257459  797007 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:50:42.267725  797007 fix.go:56] duration metric: took 5.746003726s for fixHost
	I1115 11:50:42.267754  797007 start.go:83] releasing machines lock for "no-preload-126380", held for 5.746060219s
	I1115 11:50:42.267848  797007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-126380
	I1115 11:50:42.293821  797007 ssh_runner.go:195] Run: cat /version.json
	I1115 11:50:42.293891  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:42.294131  797007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:50:42.294195  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:42.325071  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:42.341061  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:42.464613  797007 ssh_runner.go:195] Run: systemctl --version
	I1115 11:50:42.595464  797007 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:50:42.652188  797007 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:50:42.661924  797007 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:50:42.662019  797007 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:50:42.675999  797007 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 11:50:42.676022  797007 start.go:496] detecting cgroup driver to use...
	I1115 11:50:42.676076  797007 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:50:42.676143  797007 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:50:42.705099  797007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:50:42.726900  797007 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:50:42.727019  797007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:50:42.749896  797007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:50:42.781444  797007 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:50:42.980447  797007 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:50:43.194580  797007 docker.go:234] disabling docker service ...
	I1115 11:50:43.194728  797007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:50:43.226697  797007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:50:43.250985  797007 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:50:43.478864  797007 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:50:43.698595  797007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:50:43.712931  797007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:50:43.731691  797007 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:50:43.731816  797007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:43.749429  797007 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:50:43.749497  797007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:43.764286  797007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:43.780657  797007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:43.796600  797007 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:50:43.809563  797007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:43.820456  797007 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:50:43.833450  797007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
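	(Annotation: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. The values come from the commands in this log; the section headers are assumed from CRI-O's stock drop-in layout, not dumped from the node:)
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]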
	I1115 11:50:43.846236  797007 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:50:43.857847  797007 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:50:43.871768  797007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:44.055554  797007 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 11:50:44.253250  797007 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:50:44.253390  797007 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:50:44.259434  797007 start.go:564] Will wait 60s for crictl version
	I1115 11:50:44.259552  797007 ssh_runner.go:195] Run: which crictl
	I1115 11:50:44.269756  797007 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:50:44.320757  797007 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:50:44.323425  797007 ssh_runner.go:195] Run: crio --version
	I1115 11:50:44.393779  797007 ssh_runner.go:195] Run: crio --version
	I1115 11:50:44.459161  797007 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:50:44.462151  797007 cli_runner.go:164] Run: docker network inspect no-preload-126380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:50:44.487121  797007 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 11:50:44.492214  797007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:50:44.514364  797007 kubeadm.go:884] updating cluster {Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:50:44.514479  797007 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:50:44.514520  797007 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:50:44.590628  797007 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:50:44.590649  797007 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:50:44.590656  797007 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1115 11:50:44.590754  797007 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-126380 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
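	(Annotation: the kubelet flags above are installed as a systemd drop-in, 10-kubeadm.conf, which is scp'd a few lines below. When inspecting the node by hand, a way to see the merged unit would be:)
	    systemctl cat kubelet   # kubelet.service plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf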
	I1115 11:50:44.590833  797007 ssh_runner.go:195] Run: crio config
	I1115 11:50:44.667923  797007 cni.go:84] Creating CNI manager for ""
	I1115 11:50:44.667949  797007 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:50:44.667996  797007 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:50:44.668027  797007 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-126380 NodeName:no-preload-126380 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:50:44.668198  797007 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-126380"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 11:50:44.668286  797007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:50:44.678363  797007 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:50:44.678456  797007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:50:44.691102  797007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 11:50:44.714008  797007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:50:44.737390  797007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
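	(Annotation: /var/tmp/minikube/kubeadm.yaml.new is the staged copy of the kubeadm config rendered earlier in this log. A generic way to sanity-check such a file before it is applied, not something this test does, is a dry run against it:)
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run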
	I1115 11:50:44.761512  797007 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:50:44.768433  797007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
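	(Annotation: the two /etc/hosts rewrites in this run, host.minikube.internal earlier and control-plane.minikube.internal here, leave entries like the following inside the node; this is a sketch of the expected end state, not a dump of the file:)
	    192.168.85.1	host.minikube.internal
	    192.168.85.2	control-plane.minikube.internal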
	I1115 11:50:44.780041  797007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:44.983687  797007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:50:45.001226  797007 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380 for IP: 192.168.85.2
	I1115 11:50:45.001251  797007 certs.go:195] generating shared ca certs ...
	I1115 11:50:45.001283  797007 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:45.001527  797007 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:50:45.001585  797007 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:50:45.001594  797007 certs.go:257] generating profile certs ...
	I1115 11:50:45.001696  797007 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.key
	I1115 11:50:45.001766  797007 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key.d85d6acb
	I1115 11:50:45.001809  797007 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.key
	I1115 11:50:45.001932  797007 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:50:45.001966  797007 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:50:45.001977  797007 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:50:45.002002  797007 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:50:45.002025  797007 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:50:45.002047  797007 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:50:45.002090  797007 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:50:45.002743  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:50:45.108098  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:50:45.160411  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:50:45.207988  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:50:45.279794  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 11:50:45.378346  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:50:45.441575  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:50:45.485380  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:50:45.518471  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:50:45.552077  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:50:45.590104  797007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:50:45.629953  797007 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:50:45.654237  797007 ssh_runner.go:195] Run: openssl version
	I1115 11:50:45.666630  797007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:50:45.679967  797007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:50:45.684180  797007 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:50:45.684287  797007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:50:45.731731  797007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:50:45.739584  797007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:50:45.748123  797007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:45.752345  797007 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:45.752470  797007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:50:45.794478  797007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:50:45.802793  797007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:50:45.811527  797007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:50:45.820350  797007 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:50:45.820473  797007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:50:45.866318  797007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
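	(Annotation: the ln -fs targets above follow OpenSSL's subject-hash convention: the link name is the certificate's subject hash plus ".0". Reproducing the minikubeCA link by hand, as a sketch:)
	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941
	    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0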
	I1115 11:50:45.874800  797007 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:50:45.879492  797007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 11:50:45.944532  797007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 11:50:46.035899  797007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 11:50:46.115428  797007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 11:50:46.191551  797007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 11:50:46.322813  797007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
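	(Annotation: openssl x509 -checkend 86400 exits non-zero if the certificate will have expired 86400 seconds, i.e. 24 hours, from now, which is presumably how the restart path decides whether control-plane certs need regenerating. An equivalent standalone check, as a sketch:)
	    # exit 0 = still valid in 24h, exit 1 = will have expired
	    if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	        echo "certificate expires within 24h"
	    fi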
	I1115 11:50:46.462645  797007 kubeadm.go:401] StartCluster: {Name:no-preload-126380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-126380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:50:46.462789  797007 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:50:46.462888  797007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:50:46.596455  797007 cri.go:89] found id: "ff27b73ca8f1765a9b5e411c5c5a50ecdc283b3f9ac1d25c020e18cc04187039"
	I1115 11:50:46.596527  797007 cri.go:89] found id: "16ac7fdb8e9ed235613c8255c801b9a65efe815d89103579d0f55fa48408628f"
	I1115 11:50:46.596547  797007 cri.go:89] found id: "ab769dc54851c40c74b065b75a3f67d4f8d0132a1f1e065c9daa886d8665fdc7"
	I1115 11:50:46.596566  797007 cri.go:89] found id: "57c368e28f36eee195d648e761727c0670d2cfaa223fa5be99062e847379937c"
	I1115 11:50:46.596584  797007 cri.go:89] found id: ""
	I1115 11:50:46.596662  797007 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 11:50:46.648397  797007 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:50:46Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:50:46.648533  797007 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:50:46.666704  797007 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 11:50:46.666776  797007 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 11:50:46.666855  797007 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 11:50:46.693517  797007 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 11:50:46.694250  797007 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-126380" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:50:46.694571  797007 kubeconfig.go:62] /home/jenkins/minikube-integration/21894-584713/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-126380" cluster setting kubeconfig missing "no-preload-126380" context setting]
	I1115 11:50:46.695148  797007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:46.697112  797007 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 11:50:46.714303  797007 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 11:50:46.714387  797007 kubeadm.go:602] duration metric: took 47.589272ms to restartPrimaryControlPlane
	I1115 11:50:46.714411  797007 kubeadm.go:403] duration metric: took 251.774665ms to StartCluster
	I1115 11:50:46.714453  797007 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:46.714545  797007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:50:46.715546  797007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:50:46.715813  797007 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:50:46.716167  797007 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:50:46.716243  797007 addons.go:70] Setting storage-provisioner=true in profile "no-preload-126380"
	I1115 11:50:46.716258  797007 addons.go:239] Setting addon storage-provisioner=true in "no-preload-126380"
	W1115 11:50:46.716263  797007 addons.go:248] addon storage-provisioner should already be in state true
	I1115 11:50:46.716284  797007 host.go:66] Checking if "no-preload-126380" exists ...
	I1115 11:50:46.717180  797007 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:50:46.717333  797007 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:50:46.717501  797007 addons.go:70] Setting dashboard=true in profile "no-preload-126380"
	I1115 11:50:46.717547  797007 addons.go:239] Setting addon dashboard=true in "no-preload-126380"
	W1115 11:50:46.717567  797007 addons.go:248] addon dashboard should already be in state true
	I1115 11:50:46.717602  797007 host.go:66] Checking if "no-preload-126380" exists ...
	I1115 11:50:46.717724  797007 addons.go:70] Setting default-storageclass=true in profile "no-preload-126380"
	I1115 11:50:46.717737  797007 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-126380"
	I1115 11:50:46.718047  797007 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:50:46.718578  797007 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:50:46.720783  797007 out.go:179] * Verifying Kubernetes components...
	I1115 11:50:46.725934  797007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:50:46.778412  797007 addons.go:239] Setting addon default-storageclass=true in "no-preload-126380"
	W1115 11:50:46.778552  797007 addons.go:248] addon default-storageclass should already be in state true
	I1115 11:50:46.778581  797007 host.go:66] Checking if "no-preload-126380" exists ...
	I1115 11:50:46.779001  797007 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:50:46.780223  797007 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:50:46.783291  797007 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:50:46.783323  797007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:50:46.783395  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:46.803955  797007 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 11:50:46.806954  797007 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 11:50:50.422560  796265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.933378785s)
	I1115 11:50:50.422618  796265 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.770467034s)
	I1115 11:50:50.422652  796265 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:50:50.422706  796265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:50:50.422790  796265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.669146081s)
	I1115 11:50:50.631482  796265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.372980484s)
	I1115 11:50:50.631809  796265 api_server.go:72] duration metric: took 10.595231625s to wait for apiserver process to appear ...
	I1115 11:50:50.631864  796265 api_server.go:88] waiting for apiserver healthz status ...
	I1115 11:50:50.631895  796265 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 11:50:50.634808  796265 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-600818 addons enable metrics-server
	
	I1115 11:50:50.637678  796265 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1115 11:50:50.640550  796265 addons.go:515] duration metric: took 10.603402606s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1115 11:50:50.664565  796265 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 11:50:50.667042  796265 api_server.go:141] control plane version: v1.34.1
	I1115 11:50:50.667065  796265 api_server.go:131] duration metric: took 35.18234ms to wait for apiserver health ...
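	(Annotation: the healthz probe above is a plain HTTPS GET against the apiserver. Outside the test harness the same check looks roughly like the following; -k because the cluster CA is not in the host trust store:)
	    $ curl -k https://192.168.76.2:8443/healthz
	    ok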
	I1115 11:50:50.667075  796265 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 11:50:50.670708  796265 system_pods.go:59] 8 kube-system pods found
	I1115 11:50:50.670787  796265 system_pods.go:61] "coredns-66bc5c9577-k2pmf" [6eb5cbde-f6a1-4680-ac07-4a2b6e15d42f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 11:50:50.670814  796265 system_pods.go:61] "etcd-newest-cni-600818" [32466f92-ecfd-446f-bfe9-68cf519b2b89] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 11:50:50.670857  796265 system_pods.go:61] "kindnet-bcvw7" [75bd6a1d-29ff-4420-982f-97b36c4b5830] Running
	I1115 11:50:50.670883  796265 system_pods.go:61] "kube-apiserver-newest-cni-600818" [443d9983-0c4e-4303-89ec-1a6e18c316ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 11:50:50.670905  796265 system_pods.go:61] "kube-controller-manager-newest-cni-600818" [b43750ab-bb60-4d03-8054-ddcd38bc1c64] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 11:50:50.670940  796265 system_pods.go:61] "kube-proxy-kms5c" [2446e186-b744-4098-b190-0a98b30804fd] Running
	I1115 11:50:50.670966  796265 system_pods.go:61] "kube-scheduler-newest-cni-600818" [be75d8e9-f0e3-419b-85a5-702fd1fc2975] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 11:50:50.670990  796265 system_pods.go:61] "storage-provisioner" [070b587d-9d48-4f2a-9b68-11cc8e004b8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 11:50:50.671025  796265 system_pods.go:74] duration metric: took 3.943934ms to wait for pod list to return data ...
	I1115 11:50:50.671052  796265 default_sa.go:34] waiting for default service account to be created ...
	I1115 11:50:50.690075  796265 default_sa.go:45] found service account: "default"
	I1115 11:50:50.690151  796265 default_sa.go:55] duration metric: took 19.076367ms for default service account to be created ...
	I1115 11:50:50.690178  796265 kubeadm.go:587] duration metric: took 10.653602211s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 11:50:50.690223  796265 node_conditions.go:102] verifying NodePressure condition ...
	I1115 11:50:50.748879  796265 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 11:50:50.748976  796265 node_conditions.go:123] node cpu capacity is 2
	I1115 11:50:50.749003  796265 node_conditions.go:105] duration metric: took 58.758767ms to run NodePressure ...
	I1115 11:50:50.749032  796265 start.go:242] waiting for startup goroutines ...
	I1115 11:50:50.749072  796265 start.go:247] waiting for cluster config update ...
	I1115 11:50:50.749096  796265 start.go:256] writing updated cluster config ...
	I1115 11:50:50.749475  796265 ssh_runner.go:195] Run: rm -f paused
	I1115 11:50:50.863041  796265 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:50:50.868073  796265 out.go:179] * Done! kubectl is now configured to use "newest-cni-600818" cluster and "default" namespace by default
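	(Annotation: at this point the kubeconfig context has been switched to the new profile. A minimal follow-up on the host, illustrative and not part of the test, would be:)
	    kubectl config current-context   # should print newest-cni-600818
	    kubectl get pods -A              # the kube-system pods listed above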
	I1115 11:50:46.809750  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 11:50:46.809776  797007 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 11:50:46.809844  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:46.829294  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:46.832703  797007 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:50:46.832724  797007 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:50:46.832787  797007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:50:46.865069  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:46.883210  797007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:50:47.243601  797007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:50:47.329722  797007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:50:47.336350  797007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:50:47.341362  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 11:50:47.341434  797007 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 11:50:47.542798  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 11:50:47.542871  797007 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 11:50:47.637197  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 11:50:47.637279  797007 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 11:50:47.788987  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 11:50:47.789061  797007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 11:50:47.893817  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 11:50:47.893891  797007 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 11:50:47.924175  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 11:50:47.924248  797007 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 11:50:47.962364  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 11:50:47.962444  797007 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 11:50:48.011004  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 11:50:48.011093  797007 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 11:50:48.055159  797007 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:50:48.055242  797007 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 11:50:48.109894  797007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 11:50:57.186704  797007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.943074534s)
	I1115 11:50:57.186761  797007 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.857021197s)
	I1115 11:50:57.186791  797007 node_ready.go:35] waiting up to 6m0s for node "no-preload-126380" to be "Ready" ...
	I1115 11:50:57.187115  797007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.850743322s)
	I1115 11:50:57.187374  797007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.077400326s)
	I1115 11:50:57.190524  797007 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-126380 addons enable metrics-server
	
	I1115 11:50:57.217507  797007 node_ready.go:49] node "no-preload-126380" is "Ready"
	I1115 11:50:57.217588  797007 node_ready.go:38] duration metric: took 30.783911ms for node "no-preload-126380" to be "Ready" ...
	I1115 11:50:57.217648  797007 api_server.go:52] waiting for apiserver process to appear ...
	I1115 11:50:57.217721  797007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:50:57.226244  797007 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	
	
	==> CRI-O <==
	Nov 15 11:50:47 newest-cni-600818 crio[614]: time="2025-11-15T11:50:47.952072419Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:47 newest-cni-600818 crio[614]: time="2025-11-15T11:50:47.954472592Z" level=info msg="Running pod sandbox: kube-system/kindnet-bcvw7/POD" id=a0ca9fba-576d-4c23-906a-64c14cf16599 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:47 newest-cni-600818 crio[614]: time="2025-11-15T11:50:47.954527181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:47 newest-cni-600818 crio[614]: time="2025-11-15T11:50:47.963586057Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e08335e8-6b49-436f-a017-251f3bdf3bc3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:47 newest-cni-600818 crio[614]: time="2025-11-15T11:50:47.977129402Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a0ca9fba-576d-4c23-906a-64c14cf16599 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.01489915Z" level=info msg="Ran pod sandbox 84396a8c660d9c26cc79bf0c0da2577843dd393e518afc793d94252238d46d43 with infra container: kube-system/kindnet-bcvw7/POD" id=a0ca9fba-576d-4c23-906a-64c14cf16599 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.023563503Z" level=info msg="Ran pod sandbox b1d073351984372a7dbc5f0709fcb167a8a76e2776bdf6b35b593768999ae290 with infra container: kube-system/kube-proxy-kms5c/POD" id=e08335e8-6b49-436f-a017-251f3bdf3bc3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.034796507Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b58525da-1cf5-461b-bdd3-00d247c26945 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.048393465Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4fe819a9-e4bf-4ab3-970b-807bbfa030a2 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.066010486Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3249c102-4e90-4da8-b5f2-a45d40a61092 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.066796758Z" level=info msg="Creating container: kube-system/kindnet-bcvw7/kindnet-cni" id=19612d60-5c18-40ad-b379-17016619604a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.06698826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.082231594Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=60ed5b1c-9657-41e5-9276-b79b96e37b97 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.099846933Z" level=info msg="Creating container: kube-system/kube-proxy-kms5c/kube-proxy" id=f0db0400-e147-4538-9187-9b694b764568 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.100138751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.116325095Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.125942418Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.127633192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.128222647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.230001029Z" level=info msg="Created container 508191357d8c46202caf105d2b19322ed0fce00bbd8bb676251d37ea88caa5fb: kube-system/kindnet-bcvw7/kindnet-cni" id=19612d60-5c18-40ad-b379-17016619604a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.233110912Z" level=info msg="Starting container: 508191357d8c46202caf105d2b19322ed0fce00bbd8bb676251d37ea88caa5fb" id=30f29745-2f2e-41bc-a66d-7aa039dd7809 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.240610921Z" level=info msg="Created container 6b6b51789b97b7a45064b14b4d84c0c009313bbad64adbc4381219fd21228755: kube-system/kube-proxy-kms5c/kube-proxy" id=f0db0400-e147-4538-9187-9b694b764568 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.241569774Z" level=info msg="Starting container: 6b6b51789b97b7a45064b14b4d84c0c009313bbad64adbc4381219fd21228755" id=93603d69-9639-4aac-a851-29f52b1608a3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.246703571Z" level=info msg="Started container" PID=1056 containerID=508191357d8c46202caf105d2b19322ed0fce00bbd8bb676251d37ea88caa5fb description=kube-system/kindnet-bcvw7/kindnet-cni id=30f29745-2f2e-41bc-a66d-7aa039dd7809 name=/runtime.v1.RuntimeService/StartContainer sandboxID=84396a8c660d9c26cc79bf0c0da2577843dd393e518afc793d94252238d46d43
	Nov 15 11:50:48 newest-cni-600818 crio[614]: time="2025-11-15T11:50:48.261140554Z" level=info msg="Started container" PID=1054 containerID=6b6b51789b97b7a45064b14b4d84c0c009313bbad64adbc4381219fd21228755 description=kube-system/kube-proxy-kms5c/kube-proxy id=93603d69-9639-4aac-a851-29f52b1608a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1d073351984372a7dbc5f0709fcb167a8a76e2776bdf6b35b593768999ae290
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	508191357d8c4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   10 seconds ago      Running             kindnet-cni               1                   84396a8c660d9       kindnet-bcvw7                               kube-system
	6b6b51789b97b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   10 seconds ago      Running             kube-proxy                1                   b1d0733519843       kube-proxy-kms5c                            kube-system
	645246f782520       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   18 seconds ago      Running             kube-scheduler            1                   d63ba10b69c5c       kube-scheduler-newest-cni-600818            kube-system
	fd7399c25f9e0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   18 seconds ago      Running             kube-controller-manager   1                   1fd1e55f31e4b       kube-controller-manager-newest-cni-600818   kube-system
	11203d5b5b356       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   18 seconds ago      Running             kube-apiserver            1                   e3aad0da4358e       kube-apiserver-newest-cni-600818            kube-system
	80865ff5e22d4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   18 seconds ago      Running             etcd                      1                   39e6287d4fcdd       etcd-newest-cni-600818                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-600818
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-600818
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=newest-cni-600818
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_50_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:50:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-600818
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:50:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:50:47 +0000   Sat, 15 Nov 2025 11:50:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:50:47 +0000   Sat, 15 Nov 2025 11:50:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:50:47 +0000   Sat, 15 Nov 2025 11:50:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 15 Nov 2025 11:50:47 +0000   Sat, 15 Nov 2025 11:50:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-600818
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                c022f560-be97-45fe-81fb-2d2f59506bb6
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-600818                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kindnet-bcvw7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-600818             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-600818    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-kms5c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-600818             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 30s                kube-proxy       
	  Normal   Starting                 7s                 kube-proxy       
	  Warning  CgroupV1                 44s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node newest-cni-600818 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node newest-cni-600818 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node newest-cni-600818 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node newest-cni-600818 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node newest-cni-600818 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s                kubelet          Node newest-cni-600818 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           32s                node-controller  Node newest-cni-600818 event: Registered Node newest-cni-600818 in Controller
	  Normal   Starting                 19s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 19s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node newest-cni-600818 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node newest-cni-600818 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x8 over 19s)  kubelet          Node newest-cni-600818 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6s                 node-controller  Node newest-cni-600818 event: Registered Node newest-cni-600818 in Controller
	
	
	==> dmesg <==
	[  +1.127957] overlayfs: idmapped layers are currently not supported
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	[Nov15 11:44] overlayfs: idmapped layers are currently not supported
	[Nov15 11:46] overlayfs: idmapped layers are currently not supported
	[Nov15 11:47] overlayfs: idmapped layers are currently not supported
	[ +42.475391] overlayfs: idmapped layers are currently not supported
	[Nov15 11:48] overlayfs: idmapped layers are currently not supported
	[Nov15 11:49] overlayfs: idmapped layers are currently not supported
	[Nov15 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.578289] overlayfs: idmapped layers are currently not supported
	[  +6.063974] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [80865ff5e22d408a46025735f288fbc8807cecdd6680ae8eadc50da5c41cd3e6] <==
	{"level":"warn","ts":"2025-11-15T11:50:44.034245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.073119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.117646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.172176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.210713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.247134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.287271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.332589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.352971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.402305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.435767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.473232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.509341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.542108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.591987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.605095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.631498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.654660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.682181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.704278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.736429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.758606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.792649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:44.806435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:45.002610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33506","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:50:59 up  3:33,  0 user,  load average: 5.01, 3.68, 3.03
	Linux newest-cni-600818 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [508191357d8c46202caf105d2b19322ed0fce00bbd8bb676251d37ea88caa5fb] <==
	I1115 11:50:48.399334       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:50:48.425090       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 11:50:48.425226       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:50:48.425245       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:50:48.425266       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:50:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:50:48.611609       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:50:48.611628       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:50:48.611637       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:50:48.611914       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [11203d5b5b35660740cae26a3b2082fe96faeca680bc0c57a5eb2ba26511cba1] <==
	I1115 11:50:47.348248       1 cache.go:39] Caches are synced for autoregister controller
	I1115 11:50:47.443402       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:50:47.449030       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 11:50:47.449063       1 policy_source.go:240] refreshing policies
	I1115 11:50:47.449250       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 11:50:47.449329       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:50:47.450891       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 11:50:47.451633       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 11:50:47.451670       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 11:50:47.451677       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 11:50:47.477007       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 11:50:47.477101       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 11:50:47.497884       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1115 11:50:47.499902       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 11:50:47.560147       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 11:50:49.767019       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 11:50:50.048731       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 11:50:50.227403       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:50:50.291756       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:50:50.587093       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.185.188"}
	I1115 11:50:50.621325       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.232.154"}
	I1115 11:50:52.069272       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 11:50:52.200918       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 11:50:52.339336       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 11:50:52.423032       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [fd7399c25f9e0b5ec2bd454e0007c03228ee3b5f4d4bf00dc22c645038b07897] <==
	I1115 11:50:51.893299       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 11:50:51.942769       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"newest-cni-600818\" does not exist"
	I1115 11:50:51.976902       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 11:50:51.960672       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 11:50:51.960695       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 11:50:51.964277       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 11:50:51.986089       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 11:50:51.967213       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 11:50:51.986489       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 11:50:51.986508       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 11:50:51.986528       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 11:50:51.995509       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 11:50:52.005761       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 11:50:52.005876       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 11:50:52.012982       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 11:50:52.013101       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-600818"
	I1115 11:50:52.013179       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 11:50:52.005892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 11:50:52.014791       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:50:52.026269       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:50:52.026353       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:50:52.026389       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:50:52.026468       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 11:50:52.033788       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 11:50:52.034346       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [6b6b51789b97b7a45064b14b4d84c0c009313bbad64adbc4381219fd21228755] <==
	I1115 11:50:49.657640       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:50:50.014396       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:50:50.614521       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:50:50.647108       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 11:50:50.647332       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:50:51.220473       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:50:51.220590       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:50:51.231917       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:50:51.232298       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:50:51.232363       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:50:51.233945       1 config.go:200] "Starting service config controller"
	I1115 11:50:51.240906       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:50:51.240986       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:50:51.241022       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:50:51.241066       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:50:51.241106       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:50:51.241872       1 config.go:309] "Starting node config controller"
	I1115 11:50:51.241943       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:50:51.241975       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:50:51.341785       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:50:51.341890       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 11:50:51.341960       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [645246f7825202338380cb5d10ceb9da92cdfc53e1f942510d2442a0fd84a097] <==
	I1115 11:50:43.758708       1 serving.go:386] Generated self-signed cert in-memory
	I1115 11:50:50.231487       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 11:50:50.232306       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:50:50.260810       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 11:50:50.260928       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 11:50:50.261007       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:50:50.261043       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:50:50.261086       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:50:50.261129       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 11:50:50.262558       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 11:50:50.262667       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 11:50:50.365083       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 11:50:50.365314       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:50:50.366091       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 15 11:50:42 newest-cni-600818 kubelet[729]: E1115 11:50:42.961235     729 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-600818\" not found" node="newest-cni-600818"
	Nov 15 11:50:46 newest-cni-600818 kubelet[729]: I1115 11:50:46.419289     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.311349     729 apiserver.go:52] "Watching apiserver"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.417829     729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.506433     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/75bd6a1d-29ff-4420-982f-97b36c4b5830-cni-cfg\") pod \"kindnet-bcvw7\" (UID: \"75bd6a1d-29ff-4420-982f-97b36c4b5830\") " pod="kube-system/kindnet-bcvw7"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.506482     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75bd6a1d-29ff-4420-982f-97b36c4b5830-xtables-lock\") pod \"kindnet-bcvw7\" (UID: \"75bd6a1d-29ff-4420-982f-97b36c4b5830\") " pod="kube-system/kindnet-bcvw7"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.506505     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2446e186-b744-4098-b190-0a98b30804fd-xtables-lock\") pod \"kube-proxy-kms5c\" (UID: \"2446e186-b744-4098-b190-0a98b30804fd\") " pod="kube-system/kube-proxy-kms5c"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.506523     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75bd6a1d-29ff-4420-982f-97b36c4b5830-lib-modules\") pod \"kindnet-bcvw7\" (UID: \"75bd6a1d-29ff-4420-982f-97b36c4b5830\") " pod="kube-system/kindnet-bcvw7"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.506566     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2446e186-b744-4098-b190-0a98b30804fd-lib-modules\") pod \"kube-proxy-kms5c\" (UID: \"2446e186-b744-4098-b190-0a98b30804fd\") " pod="kube-system/kube-proxy-kms5c"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: E1115 11:50:47.638365     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-600818\" already exists" pod="kube-system/kube-scheduler-newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.638413     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.685044     729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: E1115 11:50:47.735644     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-600818\" already exists" pod="kube-system/etcd-newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.735681     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.797284     729 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.797389     729 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.797421     729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.798655     729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: E1115 11:50:47.814964     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-600818\" already exists" pod="kube-system/kube-apiserver-newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: I1115 11:50:47.815008     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-600818"
	Nov 15 11:50:47 newest-cni-600818 kubelet[729]: E1115 11:50:47.891738     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-600818\" already exists" pod="kube-system/kube-controller-manager-newest-cni-600818"
	Nov 15 11:50:48 newest-cni-600818 kubelet[729]: W1115 11:50:48.009456     729 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/533b7ee97cf476f14ee4e5c6dc254198104b1b7e7d20399694f315165eb2e59b/crio-84396a8c660d9c26cc79bf0c0da2577843dd393e518afc793d94252238d46d43 WatchSource:0}: Error finding container 84396a8c660d9c26cc79bf0c0da2577843dd393e518afc793d94252238d46d43: Status 404 returned error can't find the container with id 84396a8c660d9c26cc79bf0c0da2577843dd393e518afc793d94252238d46d43
	Nov 15 11:50:52 newest-cni-600818 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 11:50:52 newest-cni-600818 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 11:50:52 newest-cni-600818 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
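
The minikube logs dump above shows newest-cni-600818 coming back after its restart: kubelet first reports the node NotReady with "no CNI configuration file in /etc/cni/net.d/", the CRI-O and container-status sections then show kindnet-cni and kube-proxy being recreated, and the kubelet section ends with systemd stopping kubelet as the pause attempt begins. When a profile stays stuck in that NotReady state, two quick checks are the CNI config directory and the kindnet pod; the commands below are an illustrative sketch (the app=kindnet label is an assumption, not taken from this report):

	out/minikube-linux-arm64 -p newest-cni-600818 ssh -- ls -l /etc/cni/net.d
	kubectl --context newest-cni-600818 -n kube-system get pods -l app=kindnet -o wide
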
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-600818 -n newest-cni-600818
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-600818 -n newest-cni-600818: exit status 2 (445.541668ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-600818 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-k2pmf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nlkr7 kubernetes-dashboard-855c9754f9-rgp9d
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-600818 describe pod coredns-66bc5c9577-k2pmf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nlkr7 kubernetes-dashboard-855c9754f9-rgp9d
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-600818 describe pod coredns-66bc5c9577-k2pmf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nlkr7 kubernetes-dashboard-855c9754f9-rgp9d: exit status 1 (142.894893ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-k2pmf" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-nlkr7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-rgp9d" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-600818 describe pod coredns-66bc5c9577-k2pmf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nlkr7 kubernetes-dashboard-855c9754f9-rgp9d: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (8.14s)
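
Note on the post-mortem above: the non-running pods are gathered across all namespaces (-A), but the follow-up describe is run without a namespace, so kubectl looks for those pods in the default namespace and reports NotFound. A single-pass variant that carries the namespace through to the describe step would avoid that; this is an illustrative sketch, not how helpers_test.go is implemented:

	kubectl --context newest-cni-600818 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	  while read -r ns name; do kubectl --context newest-cni-600818 -n "$ns" describe pod "$name"; done
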

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-126380 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-126380 --alsologtostderr -v=1: exit status 80 (2.405455113s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-126380 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:51:41.469102  803860 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:51:41.469306  803860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:51:41.469311  803860 out.go:374] Setting ErrFile to fd 2...
	I1115 11:51:41.469316  803860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:51:41.469575  803860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:51:41.469825  803860 out.go:368] Setting JSON to false
	I1115 11:51:41.469843  803860 mustload.go:66] Loading cluster: no-preload-126380
	I1115 11:51:41.470226  803860 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:51:41.470700  803860 cli_runner.go:164] Run: docker container inspect no-preload-126380 --format={{.State.Status}}
	I1115 11:51:41.496985  803860 host.go:66] Checking if "no-preload-126380" exists ...
	I1115 11:51:41.497325  803860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:51:41.631177  803860 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 11:51:41.619963105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:51:41.631811  803860 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-126380 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 11:51:41.635124  803860 out.go:179] * Pausing node no-preload-126380 ... 
	I1115 11:51:41.638039  803860 host.go:66] Checking if "no-preload-126380" exists ...
	I1115 11:51:41.638391  803860 ssh_runner.go:195] Run: systemctl --version
	I1115 11:51:41.638444  803860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-126380
	I1115 11:51:41.664909  803860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/no-preload-126380/id_rsa Username:docker}
	I1115 11:51:41.782939  803860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:51:41.830057  803860 pause.go:52] kubelet running: true
	I1115 11:51:41.830176  803860 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:51:42.264706  803860 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:51:42.264812  803860 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:51:42.372358  803860 cri.go:89] found id: "e2a9e1acb1639da928e33bedd0d68edc4ebd14bd8de4a13663336f08668a6608"
	I1115 11:51:42.372381  803860 cri.go:89] found id: "b44ac911fc88c96498566ce772c1348e18d74da236c4b629cf05e9fa0d9d4ebe"
	I1115 11:51:42.372386  803860 cri.go:89] found id: "40537f2f9d73f24a3fc919e58c7be26902dcd9503e84419051fec20be5efa20d"
	I1115 11:51:42.372390  803860 cri.go:89] found id: "1b01f6a4fe6ad8a2d4e70a06ee23f3e1ea000ca7c3a2d3c66dd46c7a32a460a4"
	I1115 11:51:42.372420  803860 cri.go:89] found id: "e54819dbda5570f90b23e33d6f1b1635479dd9063dc6c9be60485bc7fd5e933c"
	I1115 11:51:42.372431  803860 cri.go:89] found id: "ff27b73ca8f1765a9b5e411c5c5a50ecdc283b3f9ac1d25c020e18cc04187039"
	I1115 11:51:42.372434  803860 cri.go:89] found id: "16ac7fdb8e9ed235613c8255c801b9a65efe815d89103579d0f55fa48408628f"
	I1115 11:51:42.372438  803860 cri.go:89] found id: "ab769dc54851c40c74b065b75a3f67d4f8d0132a1f1e065c9daa886d8665fdc7"
	I1115 11:51:42.372441  803860 cri.go:89] found id: "57c368e28f36eee195d648e761727c0670d2cfaa223fa5be99062e847379937c"
	I1115 11:51:42.372448  803860 cri.go:89] found id: "689c1c142f54cc6c23ec1e8d76b15c4c34360afb6cd644cb8eb6be69304ef50d"
	I1115 11:51:42.372455  803860 cri.go:89] found id: "6ba4ebfd5350b614116a1165ef7e8d2c6becd498c8a4d4af5dbdf487b9e37cb9"
	I1115 11:51:42.372458  803860 cri.go:89] found id: ""
	I1115 11:51:42.372522  803860 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:51:42.397686  803860 retry.go:31] will retry after 335.388886ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:51:42Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:51:42.734063  803860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:51:42.753142  803860 pause.go:52] kubelet running: false
	I1115 11:51:42.753231  803860 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:51:42.950411  803860 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:51:42.950512  803860 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:51:43.027568  803860 cri.go:89] found id: "e2a9e1acb1639da928e33bedd0d68edc4ebd14bd8de4a13663336f08668a6608"
	I1115 11:51:43.027593  803860 cri.go:89] found id: "b44ac911fc88c96498566ce772c1348e18d74da236c4b629cf05e9fa0d9d4ebe"
	I1115 11:51:43.027598  803860 cri.go:89] found id: "40537f2f9d73f24a3fc919e58c7be26902dcd9503e84419051fec20be5efa20d"
	I1115 11:51:43.027602  803860 cri.go:89] found id: "1b01f6a4fe6ad8a2d4e70a06ee23f3e1ea000ca7c3a2d3c66dd46c7a32a460a4"
	I1115 11:51:43.027631  803860 cri.go:89] found id: "e54819dbda5570f90b23e33d6f1b1635479dd9063dc6c9be60485bc7fd5e933c"
	I1115 11:51:43.027645  803860 cri.go:89] found id: "ff27b73ca8f1765a9b5e411c5c5a50ecdc283b3f9ac1d25c020e18cc04187039"
	I1115 11:51:43.027649  803860 cri.go:89] found id: "16ac7fdb8e9ed235613c8255c801b9a65efe815d89103579d0f55fa48408628f"
	I1115 11:51:43.027652  803860 cri.go:89] found id: "ab769dc54851c40c74b065b75a3f67d4f8d0132a1f1e065c9daa886d8665fdc7"
	I1115 11:51:43.027656  803860 cri.go:89] found id: "57c368e28f36eee195d648e761727c0670d2cfaa223fa5be99062e847379937c"
	I1115 11:51:43.027667  803860 cri.go:89] found id: "689c1c142f54cc6c23ec1e8d76b15c4c34360afb6cd644cb8eb6be69304ef50d"
	I1115 11:51:43.027676  803860 cri.go:89] found id: "6ba4ebfd5350b614116a1165ef7e8d2c6becd498c8a4d4af5dbdf487b9e37cb9"
	I1115 11:51:43.027679  803860 cri.go:89] found id: ""
	I1115 11:51:43.027742  803860 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:51:43.044651  803860 retry.go:31] will retry after 447.050798ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:51:43Z" level=error msg="open /run/runc: no such file or directory"
	I1115 11:51:43.491972  803860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:51:43.505427  803860 pause.go:52] kubelet running: false
	I1115 11:51:43.505529  803860 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 11:51:43.677527  803860 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 11:51:43.677614  803860 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 11:51:43.751942  803860 cri.go:89] found id: "e2a9e1acb1639da928e33bedd0d68edc4ebd14bd8de4a13663336f08668a6608"
	I1115 11:51:43.751965  803860 cri.go:89] found id: "b44ac911fc88c96498566ce772c1348e18d74da236c4b629cf05e9fa0d9d4ebe"
	I1115 11:51:43.751970  803860 cri.go:89] found id: "40537f2f9d73f24a3fc919e58c7be26902dcd9503e84419051fec20be5efa20d"
	I1115 11:51:43.751973  803860 cri.go:89] found id: "1b01f6a4fe6ad8a2d4e70a06ee23f3e1ea000ca7c3a2d3c66dd46c7a32a460a4"
	I1115 11:51:43.751977  803860 cri.go:89] found id: "e54819dbda5570f90b23e33d6f1b1635479dd9063dc6c9be60485bc7fd5e933c"
	I1115 11:51:43.751981  803860 cri.go:89] found id: "ff27b73ca8f1765a9b5e411c5c5a50ecdc283b3f9ac1d25c020e18cc04187039"
	I1115 11:51:43.751984  803860 cri.go:89] found id: "16ac7fdb8e9ed235613c8255c801b9a65efe815d89103579d0f55fa48408628f"
	I1115 11:51:43.751987  803860 cri.go:89] found id: "ab769dc54851c40c74b065b75a3f67d4f8d0132a1f1e065c9daa886d8665fdc7"
	I1115 11:51:43.751990  803860 cri.go:89] found id: "57c368e28f36eee195d648e761727c0670d2cfaa223fa5be99062e847379937c"
	I1115 11:51:43.751996  803860 cri.go:89] found id: "689c1c142f54cc6c23ec1e8d76b15c4c34360afb6cd644cb8eb6be69304ef50d"
	I1115 11:51:43.752000  803860 cri.go:89] found id: "6ba4ebfd5350b614116a1165ef7e8d2c6becd498c8a4d4af5dbdf487b9e37cb9"
	I1115 11:51:43.752002  803860 cri.go:89] found id: ""
	I1115 11:51:43.752050  803860 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 11:51:43.769043  803860 out.go:203] 
	W1115 11:51:43.771912  803860 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:51:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T11:51:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 11:51:43.771988  803860 out.go:285] * 
	* 
	W1115 11:51:43.778367  803860 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 11:51:43.783272  803860 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-126380 --alsologtostderr -v=1 failed: exit status 80
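The pause failure above reduces to sudo runc list -f json aborting with "open /run/runc: no such file or directory" while crictl still reports the kube-system containers as running. A minimal manual re-check from the host, assuming the no-preload-126380 profile is still up and SSH-reachable (each command is one the pause path already runs in the log above):

	out/minikube-linux-arm64 ssh -p no-preload-126380 -- sudo ls /run/runc            # missing, per the error above
	out/minikube-linux-arm64 ssh -p no-preload-126380 -- sudo runc list -f json       # the call that exits with status 1
	out/minikube-linux-arm64 ssh -p no-preload-126380 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # still lists the container IDs found above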
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-126380
helpers_test.go:243: (dbg) docker inspect no-preload-126380:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf",
	        "Created": "2025-11-15T11:49:07.318214347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 797181,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:50:36.587194536Z",
	            "FinishedAt": "2025-11-15T11:50:35.622894519Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/hosts",
	        "LogPath": "/var/lib/docker/containers/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf-json.log",
	        "Name": "/no-preload-126380",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-126380:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-126380",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf",
	                "LowerDir": "/var/lib/docker/overlay2/9848c74ea17203b8050bbe97a4da3abb8cf001cde7edd4cbb584ff0a4c7cd5e6-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9848c74ea17203b8050bbe97a4da3abb8cf001cde7edd4cbb584ff0a4c7cd5e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9848c74ea17203b8050bbe97a4da3abb8cf001cde7edd4cbb584ff0a4c7cd5e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9848c74ea17203b8050bbe97a4da3abb8cf001cde7edd4cbb584ff0a4c7cd5e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-126380",
	                "Source": "/var/lib/docker/volumes/no-preload-126380/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-126380",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-126380",
	                "name.minikube.sigs.k8s.io": "no-preload-126380",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19dae6f2d10522c657b37740de12557e6daf7ba316e392a49e313aa6e27d8b69",
	            "SandboxKey": "/var/run/docker/netns/19dae6f2d105",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33838"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33837"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-126380": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:23:42:7f:f8:91",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1b9530ecfade28bc16fd6c10682aa7624f38192683bf3f788bebea9faf0c447",
	                    "EndpointID": "6bbe20faf637cc2aea4b1df5689ccc9d99a0dab10f79735066371fa56915d30e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-126380",
	                        "0b66713a6755"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
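The inspect dump confirms the container itself is still Running (Pid 797181) with all five published ports bound on 127.0.0.1, so the failure is confined to the pause path inside the guest. Individual fields can be pulled with the same Go-template queries the harness uses elsewhere in this log; a short sketch against the container above:

	docker inspect -f '{{.State.Status}} (pid {{.State.Pid}})' no-preload-126380
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-126380   # 33837 in the dump above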
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-126380 -n no-preload-126380
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-126380 -n no-preload-126380: exit status 2 (361.88133ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
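Exit status 2 from the status probe is consistent with a host that is Running while kubelet was stopped by the attempted pause; per-component state can be queried the same way the harness queries {{.Host}} above. A short sketch, assuming the standard status fields (Host, Kubelet, APIServer, Kubeconfig):

	out/minikube-linux-arm64 status -p no-preload-126380 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}/{{.Kubeconfig}}'
	out/minikube-linux-arm64 status -p no-preload-126380 -o json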
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-126380 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-126380 logs -n 25: (1.47309801s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-769461 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p disable-driver-mounts-200933                                                                                                                                                                                                               │ disable-driver-mounts-200933 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ start   │ -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:50 UTC │
	│ image   │ embed-certs-404149 image list --format=json                                                                                                                                                                                                   │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ pause   │ -p embed-certs-404149 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │                     │
	│ delete  │ -p embed-certs-404149                                                                                                                                                                                                                         │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p embed-certs-404149                                                                                                                                                                                                                         │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ start   │ -p newest-cni-600818 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable metrics-server -p no-preload-126380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	│ stop    │ -p no-preload-126380 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable metrics-server -p newest-cni-600818 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	│ stop    │ -p newest-cni-600818 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable dashboard -p newest-cni-600818 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ start   │ -p newest-cni-600818 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable dashboard -p no-preload-126380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ start   │ -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:51 UTC │
	│ image   │ newest-cni-600818 image list --format=json                                                                                                                                                                                                    │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ pause   │ -p newest-cni-600818 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	│ delete  │ -p newest-cni-600818                                                                                                                                                                                                                          │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:51 UTC │ 15 Nov 25 11:51 UTC │
	│ delete  │ -p newest-cni-600818                                                                                                                                                                                                                          │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:51 UTC │ 15 Nov 25 11:51 UTC │
	│ start   │ -p auto-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-949287                  │ jenkins │ v1.37.0 │ 15 Nov 25 11:51 UTC │                     │
	│ image   │ no-preload-126380 image list --format=json                                                                                                                                                                                                    │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:51 UTC │ 15 Nov 25 11:51 UTC │
	│ pause   │ -p no-preload-126380 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:51:02
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:51:02.979062  801259 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:51:02.979266  801259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:51:02.979293  801259 out.go:374] Setting ErrFile to fd 2...
	I1115 11:51:02.979311  801259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:51:02.979579  801259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:51:02.980027  801259 out.go:368] Setting JSON to false
	I1115 11:51:02.981074  801259 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12814,"bootTime":1763194649,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:51:02.981174  801259 start.go:143] virtualization:  
	I1115 11:51:02.985729  801259 out.go:179] * [auto-949287] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:51:02.989244  801259 notify.go:221] Checking for updates...
	I1115 11:51:02.993175  801259 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:51:02.999817  801259 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:51:03.004010  801259 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:51:03.007131  801259 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:51:03.010179  801259 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:51:03.013220  801259 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:51:03.016777  801259 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:51:03.016919  801259 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:51:03.058209  801259 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:51:03.058376  801259 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:51:03.174939  801259 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-15 11:51:03.159786564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:51:03.175058  801259 docker.go:319] overlay module found
	I1115 11:51:03.178465  801259 out.go:179] * Using the docker driver based on user configuration
	I1115 11:51:03.181522  801259 start.go:309] selected driver: docker
	I1115 11:51:03.181547  801259 start.go:930] validating driver "docker" against <nil>
	I1115 11:51:03.181569  801259 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:51:03.182349  801259 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:51:03.269920  801259 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-15 11:51:03.260312178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:51:03.270089  801259 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 11:51:03.270326  801259 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:51:03.274333  801259 out.go:179] * Using Docker driver with root privileges
	I1115 11:51:03.277415  801259 cni.go:84] Creating CNI manager for ""
	I1115 11:51:03.277491  801259 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:51:03.277508  801259 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 11:51:03.277598  801259 start.go:353] cluster config:
	{Name:auto-949287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-949287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1115 11:51:03.280983  801259 out.go:179] * Starting "auto-949287" primary control-plane node in "auto-949287" cluster
	I1115 11:51:03.283966  801259 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:51:03.287023  801259 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:51:03.290028  801259 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:51:03.290084  801259 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 11:51:03.290094  801259 cache.go:65] Caching tarball of preloaded images
	I1115 11:51:03.290205  801259 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:51:03.290221  801259 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:51:03.290333  801259 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/config.json ...
	I1115 11:51:03.290357  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/config.json: {Name:mkccd1588c4b8b37ad192edf4ddc2068a4018ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:03.290500  801259 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:51:03.313818  801259 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:51:03.313839  801259 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:51:03.313851  801259 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:51:03.313878  801259 start.go:360] acquireMachinesLock for auto-949287: {Name:mkaf6ea366b01fa2d774c787f18844043a225252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:51:03.313981  801259 start.go:364] duration metric: took 86.286µs to acquireMachinesLock for "auto-949287"
	I1115 11:51:03.314006  801259 start.go:93] Provisioning new machine with config: &{Name:auto-949287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-949287 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:51:03.314069  801259 start.go:125] createHost starting for "" (driver="docker")
	W1115 11:51:01.841118  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	W1115 11:51:03.874023  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	I1115 11:51:03.318093  801259 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 11:51:03.318334  801259 start.go:159] libmachine.API.Create for "auto-949287" (driver="docker")
	I1115 11:51:03.318364  801259 client.go:173] LocalClient.Create starting
	I1115 11:51:03.318427  801259 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 11:51:03.318459  801259 main.go:143] libmachine: Decoding PEM data...
	I1115 11:51:03.318482  801259 main.go:143] libmachine: Parsing certificate...
	I1115 11:51:03.318537  801259 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 11:51:03.318559  801259 main.go:143] libmachine: Decoding PEM data...
	I1115 11:51:03.318571  801259 main.go:143] libmachine: Parsing certificate...
	I1115 11:51:03.318931  801259 cli_runner.go:164] Run: docker network inspect auto-949287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 11:51:03.343401  801259 cli_runner.go:211] docker network inspect auto-949287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 11:51:03.344940  801259 network_create.go:284] running [docker network inspect auto-949287] to gather additional debugging logs...
	I1115 11:51:03.344968  801259 cli_runner.go:164] Run: docker network inspect auto-949287
	W1115 11:51:03.369804  801259 cli_runner.go:211] docker network inspect auto-949287 returned with exit code 1
	I1115 11:51:03.369842  801259 network_create.go:287] error running [docker network inspect auto-949287]: docker network inspect auto-949287: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-949287 not found
	I1115 11:51:03.369855  801259 network_create.go:289] output of [docker network inspect auto-949287]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-949287 not found
	
	** /stderr **
	I1115 11:51:03.369979  801259 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:51:03.396331  801259 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-70b4341e5839 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:cf:e4:18:31:11} reservation:<nil>}
	I1115 11:51:03.396657  801259 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5353e0ad5224 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:f4:9a:df:ce:52} reservation:<nil>}
	I1115 11:51:03.397063  801259 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-cf2ab118f937 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:c9:22:19:21:27} reservation:<nil>}
	I1115 11:51:03.397494  801259 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1cbb0}
	I1115 11:51:03.397523  801259 network_create.go:124] attempt to create docker network auto-949287 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1115 11:51:03.397577  801259 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-949287 auto-949287
	I1115 11:51:03.471017  801259 network_create.go:108] docker network auto-949287 192.168.76.0/24 created
	I1115 11:51:03.471046  801259 kic.go:121] calculated static IP "192.168.76.2" for the "auto-949287" container
	I1115 11:51:03.471116  801259 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 11:51:03.487425  801259 cli_runner.go:164] Run: docker volume create auto-949287 --label name.minikube.sigs.k8s.io=auto-949287 --label created_by.minikube.sigs.k8s.io=true
	I1115 11:51:03.510277  801259 oci.go:103] Successfully created a docker volume auto-949287
	I1115 11:51:03.510371  801259 cli_runner.go:164] Run: docker run --rm --name auto-949287-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-949287 --entrypoint /usr/bin/test -v auto-949287:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 11:51:04.331304  801259 oci.go:107] Successfully prepared a docker volume auto-949287
	I1115 11:51:04.331376  801259 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:51:04.331387  801259 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 11:51:04.331459  801259 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-949287:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1115 11:51:06.342180  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	W1115 11:51:08.345562  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	W1115 11:51:10.844247  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	I1115 11:51:09.651098  801259 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-949287:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (5.319602637s)
	I1115 11:51:09.651131  801259 kic.go:203] duration metric: took 5.319740937s to extract preloaded images to volume ...
	W1115 11:51:09.651271  801259 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 11:51:09.651379  801259 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 11:51:09.770901  801259 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-949287 --name auto-949287 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-949287 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-949287 --network auto-949287 --ip 192.168.76.2 --volume auto-949287:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 11:51:10.287467  801259 cli_runner.go:164] Run: docker container inspect auto-949287 --format={{.State.Running}}
	I1115 11:51:10.314224  801259 cli_runner.go:164] Run: docker container inspect auto-949287 --format={{.State.Status}}
	I1115 11:51:10.334347  801259 cli_runner.go:164] Run: docker exec auto-949287 stat /var/lib/dpkg/alternatives/iptables
	I1115 11:51:10.391570  801259 oci.go:144] the created container "auto-949287" has a running status.
	I1115 11:51:10.391602  801259 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa...
	I1115 11:51:11.273884  801259 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 11:51:11.303012  801259 cli_runner.go:164] Run: docker container inspect auto-949287 --format={{.State.Status}}
	I1115 11:51:11.331174  801259 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 11:51:11.331200  801259 kic_runner.go:114] Args: [docker exec --privileged auto-949287 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 11:51:11.397421  801259 cli_runner.go:164] Run: docker container inspect auto-949287 --format={{.State.Status}}
	I1115 11:51:11.419515  801259 machine.go:94] provisionDockerMachine start ...
	I1115 11:51:11.419612  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:11.446772  801259 main.go:143] libmachine: Using SSH client type: native
	I1115 11:51:11.447112  801259 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1115 11:51:11.447129  801259 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:51:11.447844  801259 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1115 11:51:12.847029  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	W1115 11:51:15.347167  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	I1115 11:51:14.602442  801259 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-949287
	
	I1115 11:51:14.602466  801259 ubuntu.go:182] provisioning hostname "auto-949287"
	I1115 11:51:14.602552  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:14.620936  801259 main.go:143] libmachine: Using SSH client type: native
	I1115 11:51:14.621252  801259 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1115 11:51:14.621268  801259 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-949287 && echo "auto-949287" | sudo tee /etc/hostname
	I1115 11:51:14.790461  801259 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-949287
	
	I1115 11:51:14.790552  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:14.808936  801259 main.go:143] libmachine: Using SSH client type: native
	I1115 11:51:14.809245  801259 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1115 11:51:14.809295  801259 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-949287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-949287/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-949287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:51:14.974135  801259 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 11:51:14.974163  801259 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:51:14.974185  801259 ubuntu.go:190] setting up certificates
	I1115 11:51:14.974194  801259 provision.go:84] configureAuth start
	I1115 11:51:14.974255  801259 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-949287
	I1115 11:51:14.991805  801259 provision.go:143] copyHostCerts
	I1115 11:51:14.991883  801259 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:51:14.991893  801259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:51:14.991977  801259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:51:14.992075  801259 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:51:14.992084  801259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:51:14.992109  801259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:51:14.992167  801259 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:51:14.992175  801259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:51:14.992199  801259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:51:14.992250  801259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.auto-949287 san=[127.0.0.1 192.168.76.2 auto-949287 localhost minikube]
	I1115 11:51:15.272110  801259 provision.go:177] copyRemoteCerts
	I1115 11:51:15.272186  801259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:51:15.272226  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:15.303839  801259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa Username:docker}
	I1115 11:51:15.417036  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:51:15.437526  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1115 11:51:15.457434  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 11:51:15.478460  801259 provision.go:87] duration metric: took 504.24088ms to configureAuth
	I1115 11:51:15.478529  801259 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:51:15.478748  801259 config.go:182] Loaded profile config "auto-949287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:51:15.478901  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:15.497776  801259 main.go:143] libmachine: Using SSH client type: native
	I1115 11:51:15.498104  801259 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1115 11:51:15.498124  801259 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:51:15.823255  801259 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:51:15.823277  801259 machine.go:97] duration metric: took 4.403735833s to provisionDockerMachine
	I1115 11:51:15.823287  801259 client.go:176] duration metric: took 12.504916951s to LocalClient.Create
	I1115 11:51:15.823344  801259 start.go:167] duration metric: took 12.504968774s to libmachine.API.Create "auto-949287"
	I1115 11:51:15.823353  801259 start.go:293] postStartSetup for "auto-949287" (driver="docker")
	I1115 11:51:15.823364  801259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:51:15.823468  801259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:51:15.823532  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:15.849752  801259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa Username:docker}
	I1115 11:51:15.957286  801259 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:51:15.961590  801259 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:51:15.961624  801259 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:51:15.961644  801259 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:51:15.961699  801259 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:51:15.961805  801259 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:51:15.961907  801259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:51:15.969411  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:51:15.987589  801259 start.go:296] duration metric: took 164.219834ms for postStartSetup
	I1115 11:51:15.988024  801259 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-949287
	I1115 11:51:16.029188  801259 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/config.json ...
	I1115 11:51:16.029492  801259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:51:16.029542  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:16.048070  801259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa Username:docker}
	I1115 11:51:16.152062  801259 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:51:16.157396  801259 start.go:128] duration metric: took 12.843313266s to createHost
	I1115 11:51:16.157421  801259 start.go:83] releasing machines lock for "auto-949287", held for 12.843431979s
	I1115 11:51:16.157542  801259 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-949287
	I1115 11:51:16.174474  801259 ssh_runner.go:195] Run: cat /version.json
	I1115 11:51:16.174562  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:16.174596  801259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:51:16.174668  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:16.198062  801259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa Username:docker}
	I1115 11:51:16.208298  801259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa Username:docker}
	I1115 11:51:16.409942  801259 ssh_runner.go:195] Run: systemctl --version
	I1115 11:51:16.416671  801259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:51:16.456511  801259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:51:16.460569  801259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:51:16.460636  801259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:51:16.491244  801259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 11:51:16.491269  801259 start.go:496] detecting cgroup driver to use...
	I1115 11:51:16.491311  801259 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 11:51:16.491365  801259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:51:16.509735  801259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:51:16.522869  801259 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:51:16.522934  801259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:51:16.541802  801259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:51:16.562351  801259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:51:16.688193  801259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:51:16.821459  801259 docker.go:234] disabling docker service ...
	I1115 11:51:16.821529  801259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:51:16.845425  801259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:51:16.860410  801259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:51:16.986322  801259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:51:17.116107  801259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:51:17.136619  801259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:51:17.154495  801259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:51:17.154616  801259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:51:17.165537  801259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:51:17.165609  801259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:51:17.175214  801259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:51:17.184385  801259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:51:17.194010  801259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:51:17.202383  801259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:51:17.211241  801259 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:51:17.225169  801259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:51:17.234456  801259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:51:17.243697  801259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:51:17.252515  801259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:51:17.376992  801259 ssh_runner.go:195] Run: sudo systemctl restart crio
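	Taken together, the sed edits above amount to a drop-in roughly like the following in /etc/crio/crio.conf.d/02-crio.conf (a minimal sketch reconstructed from the commands shown, not a dump of the file as written on the node):
	
	  # sketch: only the keys touched by the commands above; the image's stock keys are omitted
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]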
	I1115 11:51:17.506190  801259 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:51:17.506258  801259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:51:17.510165  801259 start.go:564] Will wait 60s for crictl version
	I1115 11:51:17.510230  801259 ssh_runner.go:195] Run: which crictl
	I1115 11:51:17.514129  801259 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:51:17.540836  801259 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:51:17.540966  801259 ssh_runner.go:195] Run: crio --version
	I1115 11:51:17.570325  801259 ssh_runner.go:195] Run: crio --version
	I1115 11:51:17.615077  801259 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:51:17.617854  801259 cli_runner.go:164] Run: docker network inspect auto-949287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:51:17.641379  801259 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 11:51:17.645007  801259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
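	The command above drops any stale host.minikube.internal line and appends the current mapping via a temp file; the same replace-then-append pattern is reused later for control-plane.minikube.internal. As a standalone sketch (hypothetical helper name, same commands the log runs):
	
	  # add_host_entry is a hypothetical wrapper around the command shown above
	  add_host_entry() {   # usage: add_host_entry <ip> <hostname>
	    { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	    sudo cp "/tmp/h.$$" /etc/hosts
	  }
	  add_host_entry 192.168.76.1 host.minikube.internal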
	I1115 11:51:17.655328  801259 kubeadm.go:884] updating cluster {Name:auto-949287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-949287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:51:17.655442  801259 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:51:17.655496  801259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:51:17.690532  801259 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:51:17.690555  801259 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:51:17.690610  801259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:51:17.714904  801259 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:51:17.714928  801259 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:51:17.714935  801259 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 11:51:17.715024  801259 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-949287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-949287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:51:17.715107  801259 ssh_runner.go:195] Run: crio config
	I1115 11:51:17.789061  801259 cni.go:84] Creating CNI manager for ""
	I1115 11:51:17.789083  801259 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:51:17.789120  801259 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:51:17.789157  801259 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-949287 NodeName:auto-949287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:51:17.789343  801259 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-949287"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 11:51:17.789432  801259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:51:17.798201  801259 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:51:17.798291  801259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:51:17.805777  801259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1115 11:51:17.817831  801259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:51:17.831255  801259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
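	If one wanted to sanity-check the generated config just copied above outside of this harness, kubeadm ships a validator; a hedged one-liner against the freshly copied file (not part of this run):
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new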
	I1115 11:51:17.846544  801259 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:51:17.849988  801259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:51:17.860241  801259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:51:17.981343  801259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:51:18.000070  801259 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287 for IP: 192.168.76.2
	I1115 11:51:18.000137  801259 certs.go:195] generating shared ca certs ...
	I1115 11:51:18.000170  801259 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:18.000329  801259 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:51:18.000420  801259 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:51:18.000445  801259 certs.go:257] generating profile certs ...
	I1115 11:51:18.000548  801259 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.key
	I1115 11:51:18.000586  801259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt with IP's: []
	I1115 11:51:19.151728  801259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt ...
	I1115 11:51:19.151803  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: {Name:mk1f664ba8774865b126ed1b0ba345def09c92d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:19.152024  801259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.key ...
	I1115 11:51:19.152062  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.key: {Name:mk784dfc70b94f5b7384eca3e8931e0910ae6b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:19.152184  801259 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.key.1493e8ca
	I1115 11:51:19.152225  801259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.crt.1493e8ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1115 11:51:20.180125  801259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.crt.1493e8ca ...
	I1115 11:51:20.180157  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.crt.1493e8ca: {Name:mkdd6d832edf6e47302d8e99273580a970badefa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:20.180342  801259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.key.1493e8ca ...
	I1115 11:51:20.180357  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.key.1493e8ca: {Name:mk8ef8bf7a18bf3ddee5327e29765033ac0529ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:20.180442  801259 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.crt.1493e8ca -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.crt
	I1115 11:51:20.180527  801259 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.key.1493e8ca -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.key
	I1115 11:51:20.180589  801259 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.key
	I1115 11:51:20.180608  801259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.crt with IP's: []
	I1115 11:51:20.549922  801259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.crt ...
	I1115 11:51:20.549952  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.crt: {Name:mk501a560abbfaf19f19afcafd487e734f456053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:20.550133  801259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.key ...
	I1115 11:51:20.550148  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.key: {Name:mk00d3b792809c0072834411473de68993e6c82f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:20.550331  801259 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:51:20.550377  801259 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:51:20.550391  801259 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:51:20.550415  801259 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:51:20.550451  801259 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:51:20.550478  801259 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:51:20.550523  801259 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:51:20.551159  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:51:20.572209  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:51:20.598554  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:51:20.616463  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:51:20.635216  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1115 11:51:20.652841  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:51:20.670249  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:51:20.688395  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:51:20.707525  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:51:20.724548  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:51:20.742445  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:51:20.760474  801259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:51:20.773396  801259 ssh_runner.go:195] Run: openssl version
	I1115 11:51:20.781747  801259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:51:20.790864  801259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:51:20.794683  801259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:51:20.794749  801259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:51:20.835450  801259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:51:20.847339  801259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:51:20.855882  801259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:51:20.859680  801259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:51:20.859746  801259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:51:20.902118  801259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:51:20.910582  801259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:51:20.919139  801259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:51:20.923481  801259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:51:20.923594  801259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:51:20.968652  801259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
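	The hash-named links above follow the OpenSSL hashed-directory convention: the link name is the certificate's subject hash plus a .0 suffix. A sketch of how one such link is derived, using the same commands the log runs but capturing the hash explicitly:
	
	  # prints e.g. b5213941 for minikubeCA.pem, matching /etc/ssl/certs/b5213941.0 above
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"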
	I1115 11:51:20.977341  801259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:51:20.982124  801259 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 11:51:20.982224  801259 kubeadm.go:401] StartCluster: {Name:auto-949287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-949287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:51:20.982327  801259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:51:20.982412  801259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:51:21.014995  801259 cri.go:89] found id: ""
	I1115 11:51:21.015117  801259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:51:21.023430  801259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 11:51:21.031622  801259 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 11:51:21.031687  801259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 11:51:21.039490  801259 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 11:51:21.039506  801259 kubeadm.go:158] found existing configuration files:
	
	I1115 11:51:21.039555  801259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 11:51:21.047481  801259 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 11:51:21.047566  801259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 11:51:21.054894  801259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 11:51:21.062451  801259 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 11:51:21.062518  801259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 11:51:21.069920  801259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 11:51:21.077751  801259 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 11:51:21.077819  801259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 11:51:21.085479  801259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 11:51:21.093196  801259 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 11:51:21.093302  801259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 11:51:21.100797  801259 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 11:51:21.151317  801259 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 11:51:21.151715  801259 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 11:51:21.178162  801259 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 11:51:21.178241  801259 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 11:51:21.178300  801259 kubeadm.go:319] OS: Linux
	I1115 11:51:21.178353  801259 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 11:51:21.178407  801259 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 11:51:21.178460  801259 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 11:51:21.178514  801259 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 11:51:21.178569  801259 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 11:51:21.178624  801259 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 11:51:21.178676  801259 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 11:51:21.178729  801259 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 11:51:21.178780  801259 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 11:51:21.253794  801259 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 11:51:21.253925  801259 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 11:51:21.254064  801259 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 11:51:21.262228  801259 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1115 11:51:17.841089  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	W1115 11:51:19.842109  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	I1115 11:51:21.268334  801259 out.go:252]   - Generating certificates and keys ...
	I1115 11:51:21.268489  801259 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 11:51:21.268612  801259 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 11:51:21.594908  801259 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 11:51:22.246541  801259 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 11:51:22.793108  801259 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	W1115 11:51:22.341764  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	W1115 11:51:24.341903  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	I1115 11:51:23.084587  801259 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 11:51:23.577940  801259 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 11:51:23.578843  801259 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-949287 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 11:51:24.290468  801259 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 11:51:24.290832  801259 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-949287 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 11:51:24.524711  801259 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 11:51:24.830976  801259 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 11:51:25.150762  801259 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 11:51:25.151413  801259 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 11:51:25.225622  801259 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 11:51:25.547932  801259 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 11:51:26.329008  801259 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 11:51:27.032040  801259 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 11:51:27.348875  801259 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 11:51:27.349540  801259 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 11:51:27.352587  801259 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 11:51:27.356171  801259 out.go:252]   - Booting up control plane ...
	I1115 11:51:27.356314  801259 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 11:51:27.356413  801259 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 11:51:27.357907  801259 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 11:51:27.374665  801259 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 11:51:27.375300  801259 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 11:51:27.383937  801259 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 11:51:27.384700  801259 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 11:51:27.385154  801259 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 11:51:27.528631  801259 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 11:51:27.528760  801259 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1115 11:51:26.342472  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	I1115 11:51:27.844136  797007 pod_ready.go:94] pod "coredns-66bc5c9577-m2hwn" is "Ready"
	I1115 11:51:27.844171  797007 pod_ready.go:86] duration metric: took 30.508758489s for pod "coredns-66bc5c9577-m2hwn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:27.855148  797007 pod_ready.go:83] waiting for pod "etcd-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:27.861128  797007 pod_ready.go:94] pod "etcd-no-preload-126380" is "Ready"
	I1115 11:51:27.861161  797007 pod_ready.go:86] duration metric: took 5.981953ms for pod "etcd-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:27.864017  797007 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:27.870043  797007 pod_ready.go:94] pod "kube-apiserver-no-preload-126380" is "Ready"
	I1115 11:51:27.870075  797007 pod_ready.go:86] duration metric: took 6.026286ms for pod "kube-apiserver-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:27.872965  797007 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:28.040056  797007 pod_ready.go:94] pod "kube-controller-manager-no-preload-126380" is "Ready"
	I1115 11:51:28.040133  797007 pod_ready.go:86] duration metric: took 167.140432ms for pod "kube-controller-manager-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:28.239325  797007 pod_ready.go:83] waiting for pod "kube-proxy-zhsz4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:28.638475  797007 pod_ready.go:94] pod "kube-proxy-zhsz4" is "Ready"
	I1115 11:51:28.638499  797007 pod_ready.go:86] duration metric: took 399.151088ms for pod "kube-proxy-zhsz4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:28.839110  797007 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:29.238969  797007 pod_ready.go:94] pod "kube-scheduler-no-preload-126380" is "Ready"
	I1115 11:51:29.238994  797007 pod_ready.go:86] duration metric: took 399.860133ms for pod "kube-scheduler-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:29.239007  797007 pod_ready.go:40] duration metric: took 31.907122831s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:51:29.346988  797007 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:51:29.350207  797007 out.go:179] * Done! kubectl is now configured to use "no-preload-126380" cluster and "default" namespace by default
	I1115 11:51:28.032819  801259 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.825918ms
	I1115 11:51:28.032964  801259 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 11:51:28.033051  801259 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1115 11:51:28.033145  801259 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 11:51:28.033228  801259 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 11:51:32.007078  801259 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.974269283s
	I1115 11:51:33.380714  801259 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.348144075s
	I1115 11:51:35.537990  801259 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.505413885s
	I1115 11:51:35.570034  801259 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 11:51:35.591353  801259 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 11:51:35.605304  801259 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 11:51:35.605527  801259 kubeadm.go:319] [mark-control-plane] Marking the node auto-949287 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 11:51:35.620787  801259 kubeadm.go:319] [bootstrap-token] Using token: mjed6u.rv12ltnow4014422
	I1115 11:51:35.623854  801259 out.go:252]   - Configuring RBAC rules ...
	I1115 11:51:35.623985  801259 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 11:51:35.628902  801259 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 11:51:35.637888  801259 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 11:51:35.642338  801259 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 11:51:35.650279  801259 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 11:51:35.654369  801259 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 11:51:35.945378  801259 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 11:51:36.386723  801259 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 11:51:36.945683  801259 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 11:51:36.947231  801259 kubeadm.go:319] 
	I1115 11:51:36.947315  801259 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 11:51:36.947322  801259 kubeadm.go:319] 
	I1115 11:51:36.947404  801259 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 11:51:36.947409  801259 kubeadm.go:319] 
	I1115 11:51:36.947440  801259 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 11:51:36.947502  801259 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 11:51:36.947571  801259 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 11:51:36.947577  801259 kubeadm.go:319] 
	I1115 11:51:36.947634  801259 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 11:51:36.947638  801259 kubeadm.go:319] 
	I1115 11:51:36.947688  801259 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 11:51:36.947692  801259 kubeadm.go:319] 
	I1115 11:51:36.947747  801259 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 11:51:36.947825  801259 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 11:51:36.947898  801259 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 11:51:36.947902  801259 kubeadm.go:319] 
	I1115 11:51:36.947991  801259 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 11:51:36.948079  801259 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 11:51:36.948085  801259 kubeadm.go:319] 
	I1115 11:51:36.948172  801259 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mjed6u.rv12ltnow4014422 \
	I1115 11:51:36.948280  801259 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a \
	I1115 11:51:36.948306  801259 kubeadm.go:319] 	--control-plane 
	I1115 11:51:36.948311  801259 kubeadm.go:319] 
	I1115 11:51:36.948399  801259 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 11:51:36.948404  801259 kubeadm.go:319] 
	I1115 11:51:36.948489  801259 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mjed6u.rv12ltnow4014422 \
	I1115 11:51:36.948595  801259 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a 
	I1115 11:51:36.951219  801259 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 11:51:36.951468  801259 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 11:51:36.951576  801259 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 11:51:36.951598  801259 cni.go:84] Creating CNI manager for ""
	I1115 11:51:36.951607  801259 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:51:36.954770  801259 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 11:51:36.957782  801259 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 11:51:36.962191  801259 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 11:51:36.962212  801259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 11:51:36.977199  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
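	A hedged way to confirm the CNI manifest landed (not something this run does) would be to list the workloads it creates in kube-system with the same kubectl binary and kubeconfig:
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonsets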
	I1115 11:51:37.742225  801259 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 11:51:37.742310  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:51:37.742354  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-949287 minikube.k8s.io/updated_at=2025_11_15T11_51_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=auto-949287 minikube.k8s.io/primary=true
	I1115 11:51:37.905919  801259 ops.go:34] apiserver oom_adj: -16
	I1115 11:51:37.906018  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:51:38.407000  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:51:38.906749  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:51:39.406161  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:51:39.906140  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:51:40.407105  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:51:40.496608  801259 kubeadm.go:1114] duration metric: took 2.754346298s to wait for elevateKubeSystemPrivileges
	I1115 11:51:40.496639  801259 kubeadm.go:403] duration metric: took 19.51441832s to StartCluster
	I1115 11:51:40.496657  801259 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:40.496718  801259 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:51:40.497735  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:40.497989  801259 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:51:40.498085  801259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 11:51:40.498356  801259 config.go:182] Loaded profile config "auto-949287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:51:40.498358  801259 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:51:40.498439  801259 addons.go:70] Setting storage-provisioner=true in profile "auto-949287"
	I1115 11:51:40.498456  801259 addons.go:239] Setting addon storage-provisioner=true in "auto-949287"
	I1115 11:51:40.498481  801259 host.go:66] Checking if "auto-949287" exists ...
	I1115 11:51:40.498994  801259 cli_runner.go:164] Run: docker container inspect auto-949287 --format={{.State.Status}}
	I1115 11:51:40.499205  801259 addons.go:70] Setting default-storageclass=true in profile "auto-949287"
	I1115 11:51:40.499223  801259 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-949287"
	I1115 11:51:40.499509  801259 cli_runner.go:164] Run: docker container inspect auto-949287 --format={{.State.Status}}
	I1115 11:51:40.501043  801259 out.go:179] * Verifying Kubernetes components...
	I1115 11:51:40.504113  801259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:51:40.553456  801259 addons.go:239] Setting addon default-storageclass=true in "auto-949287"
	I1115 11:51:40.553508  801259 host.go:66] Checking if "auto-949287" exists ...
	I1115 11:51:40.553959  801259 cli_runner.go:164] Run: docker container inspect auto-949287 --format={{.State.Status}}
	I1115 11:51:40.556161  801259 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:51:40.559067  801259 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:51:40.559092  801259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:51:40.559159  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:40.612064  801259 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:51:40.612093  801259 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:51:40.612155  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:40.634593  801259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa Username:docker}
	I1115 11:51:40.646511  801259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa Username:docker}
	I1115 11:51:40.736688  801259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 11:51:40.807912  801259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:51:40.906592  801259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:51:40.925836  801259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:51:41.425007  801259 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1115 11:51:41.426751  801259 node_ready.go:35] waiting up to 15m0s for node "auto-949287" to be "Ready" ...
	I1115 11:51:41.949495  801259 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-949287" context rescaled to 1 replicas
	I1115 11:51:42.290051  801259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.364181491s)
	I1115 11:51:42.293086  801259 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1115 11:51:42.296944  801259 addons.go:515] duration metric: took 1.798575528s for enable addons: enabled=[default-storageclass storage-provisioner]
	
	
	==> CRI-O <==
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.464415866Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.484292961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.485049555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.512774338Z" level=info msg="Created container 689c1c142f54cc6c23ec1e8d76b15c4c34360afb6cd644cb8eb6be69304ef50d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7/dashboard-metrics-scraper" id=56409253-1b97-4f41-a48c-73afe748ec3e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.518809961Z" level=info msg="Starting container: 689c1c142f54cc6c23ec1e8d76b15c4c34360afb6cd644cb8eb6be69304ef50d" id=b1117351-7ebd-439b-a4d6-b92a11b120eb name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.5268107Z" level=info msg="Started container" PID=1656 containerID=689c1c142f54cc6c23ec1e8d76b15c4c34360afb6cd644cb8eb6be69304ef50d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7/dashboard-metrics-scraper id=b1117351-7ebd-439b-a4d6-b92a11b120eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=4064a12a3d4e231271e3c21ef53485a3f153a661341ea7a15c9a287583b6e122
	Nov 15 11:51:32 no-preload-126380 conmon[1654]: conmon 689c1c142f54cc6c23ec <ninfo>: container 1656 exited with status 1
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.924759949Z" level=info msg="Removing container: a4fcb529527c180e478a9215d20db8b24c3d5584a01637ee0ccbf7556136d806" id=783ff8bf-7912-464a-8f0e-c3bfd00d55ec name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.932641524Z" level=info msg="Error loading conmon cgroup of container a4fcb529527c180e478a9215d20db8b24c3d5584a01637ee0ccbf7556136d806: cgroup deleted" id=783ff8bf-7912-464a-8f0e-c3bfd00d55ec name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.93593023Z" level=info msg="Removed container a4fcb529527c180e478a9215d20db8b24c3d5584a01637ee0ccbf7556136d806: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7/dashboard-metrics-scraper" id=783ff8bf-7912-464a-8f0e-c3bfd00d55ec name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.552212077Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.558115818Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.558157385Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.55818365Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.565163176Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.565193175Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.565209905Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.571373899Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.571408968Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.571430753Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.57618889Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.576353519Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.576433668Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.581685554Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.581817887Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	689c1c142f54c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   2                   4064a12a3d4e2       dashboard-metrics-scraper-6ffb444bf9-9ngh7   kubernetes-dashboard
	e2a9e1acb1639       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           17 seconds ago      Running             storage-provisioner         2                   4be3faa1d3106       storage-provisioner                          kube-system
	6ba4ebfd5350b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago      Running             kubernetes-dashboard        0                   463d45d73c28d       kubernetes-dashboard-855c9754f9-t7kpg        kubernetes-dashboard
	27fc9b3c51b1f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   17abe705f192e       busybox                                      default
	b44ac911fc88c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           48 seconds ago      Running             coredns                     1                   81ffe0d3a7caf       coredns-66bc5c9577-m2hwn                     kube-system
	40537f2f9d73f       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           48 seconds ago      Exited              storage-provisioner         1                   4be3faa1d3106       storage-provisioner                          kube-system
	1b01f6a4fe6ad       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   d72ab97f23e94       kindnet-7vrr2                                kube-system
	e54819dbda557       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           49 seconds ago      Running             kube-proxy                  1                   c8a0649166f49       kube-proxy-zhsz4                             kube-system
	ff27b73ca8f17       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   5cea28a31c0e0       kube-scheduler-no-preload-126380             kube-system
	16ac7fdb8e9ed       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   4bfd70ad1fd14       kube-apiserver-no-preload-126380             kube-system
	ab769dc54851c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   5f606be6e2ae9       etcd-no-preload-126380                       kube-system
	57c368e28f36e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   82a80ea3b5900       kube-controller-manager-no-preload-126380    kube-system
	
	
	==> coredns [b44ac911fc88c96498566ce772c1348e18d74da236c4b629cf05e9fa0d9d4ebe] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51407 - 33488 "HINFO IN 6593363120524801419.3718986220948722277. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011901137s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-126380
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-126380
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=no-preload-126380
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_49_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:49:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-126380
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:51:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:51:35 +0000   Sat, 15 Nov 2025 11:49:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:51:35 +0000   Sat, 15 Nov 2025 11:49:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:51:35 +0000   Sat, 15 Nov 2025 11:49:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:51:35 +0000   Sat, 15 Nov 2025 11:50:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-126380
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                a22ae12e-ce80-4a2c-98ad-3a3e8aeb26aa
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-m2hwn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-no-preload-126380                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-7vrr2                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-126380              250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-126380     200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-zhsz4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-126380              100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9ngh7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-t7kpg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 109s                 kube-proxy       
	  Normal   Starting                 48s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node no-preload-126380 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node no-preload-126380 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node no-preload-126380 status is now: NodeHasSufficientPID
	  Normal   Starting                 117s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    116s                 kubelet          Node no-preload-126380 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s                 kubelet          Node no-preload-126380 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  116s                 kubelet          Node no-preload-126380 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           113s                 node-controller  Node no-preload-126380 event: Registered Node no-preload-126380 in Controller
	  Normal   NodeReady                96s                  kubelet          Node no-preload-126380 status is now: NodeReady
	  Normal   Starting                 60s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)    kubelet          Node no-preload-126380 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)    kubelet          Node no-preload-126380 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)    kubelet          Node no-preload-126380 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                  node-controller  Node no-preload-126380 event: Registered Node no-preload-126380 in Controller
	
	
	==> dmesg <==
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	[Nov15 11:44] overlayfs: idmapped layers are currently not supported
	[Nov15 11:46] overlayfs: idmapped layers are currently not supported
	[Nov15 11:47] overlayfs: idmapped layers are currently not supported
	[ +42.475391] overlayfs: idmapped layers are currently not supported
	[Nov15 11:48] overlayfs: idmapped layers are currently not supported
	[Nov15 11:49] overlayfs: idmapped layers are currently not supported
	[Nov15 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.578289] overlayfs: idmapped layers are currently not supported
	[  +6.063974] overlayfs: idmapped layers are currently not supported
	[Nov15 11:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ab769dc54851c40c74b065b75a3f67d4f8d0132a1f1e065c9daa886d8665fdc7] <==
	{"level":"warn","ts":"2025-11-15T11:50:52.538028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.574982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.618930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.657934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.678686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.713477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.747669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.773669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.805492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.840069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.866064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.890424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.906505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.925584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.939354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.956942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.001392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.045345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.059048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.083916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.100080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.125344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.147530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.157651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.223352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40672","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:51:45 up  3:34,  0 user,  load average: 4.38, 3.78, 3.10
	Linux no-preload-126380 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b01f6a4fe6ad8a2d4e70a06ee23f3e1ea000ca7c3a2d3c66dd46c7a32a460a4] <==
	I1115 11:50:56.357047       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:50:56.357312       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 11:50:56.357440       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:50:56.357452       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:50:56.357462       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:50:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:50:56.551019       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:50:56.552210       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:50:56.552244       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:50:56.552641       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 11:51:26.552037       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 11:51:26.552679       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 11:51:26.553947       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 11:51:26.601820       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1115 11:51:28.152486       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:51:28.152520       1 metrics.go:72] Registering metrics
	I1115 11:51:28.154010       1 controller.go:711] "Syncing nftables rules"
	I1115 11:51:36.551168       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:51:36.551956       1 main.go:301] handling current node
	
	
	==> kube-apiserver [16ac7fdb8e9ed235613c8255c801b9a65efe815d89103579d0f55fa48408628f] <==
	I1115 11:50:54.794148       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 11:50:54.821538       1 aggregator.go:171] initial CRD sync complete...
	I1115 11:50:54.829166       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 11:50:54.829174       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 11:50:54.829181       1 cache.go:39] Caches are synced for autoregister controller
	I1115 11:50:54.837594       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 11:50:54.842356       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1115 11:50:54.842368       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 11:50:54.853335       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:50:54.861437       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 11:50:54.879306       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 11:50:54.879331       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 11:50:54.879845       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 11:50:54.889486       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 11:50:55.101733       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:50:55.367618       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 11:50:56.154304       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 11:50:56.503026       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 11:50:56.628054       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:50:56.658290       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:50:57.004192       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.64.168"}
	I1115 11:50:57.084512       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.242.39"}
	I1115 11:50:58.978654       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 11:50:59.230694       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 11:50:59.294898       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [57c368e28f36eee195d648e761727c0670d2cfaa223fa5be99062e847379937c] <==
	I1115 11:50:58.822413       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 11:50:58.822796       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 11:50:58.823097       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:50:58.840553       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 11:50:58.842502       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 11:50:58.844899       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 11:50:58.851139       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 11:50:58.857386       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 11:50:58.868396       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 11:50:58.871109       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 11:50:58.871300       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 11:50:58.871342       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 11:50:58.876538       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 11:50:58.882654       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 11:50:58.882867       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 11:50:58.882978       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-126380"
	I1115 11:50:58.883052       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 11:50:58.883210       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:50:58.883248       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:50:58.883276       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:50:58.884155       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:50:58.884311       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 11:50:58.884327       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:50:58.890168       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 11:50:58.895195       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [e54819dbda5570f90b23e33d6f1b1635479dd9063dc6c9be60485bc7fd5e933c] <==
	I1115 11:50:56.570667       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:50:56.870549       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:50:57.023278       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:50:57.023768       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 11:50:57.023863       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:50:57.065646       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:50:57.065768       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:50:57.081459       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:50:57.081854       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:50:57.082398       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:50:57.083757       1 config.go:200] "Starting service config controller"
	I1115 11:50:57.083872       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:50:57.083948       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:50:57.083982       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:50:57.084020       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:50:57.084045       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:50:57.084816       1 config.go:309] "Starting node config controller"
	I1115 11:50:57.084878       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:50:57.084910       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:50:57.184991       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:50:57.185039       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 11:50:57.185076       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ff27b73ca8f1765a9b5e411c5c5a50ecdc283b3f9ac1d25c020e18cc04187039] <==
	I1115 11:50:54.679004       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:50:54.698453       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 11:50:54.698571       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:50:54.698589       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:50:54.698605       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1115 11:50:54.732060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 11:50:54.786526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 11:50:54.786627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:50:54.799649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 11:50:54.799770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 11:50:54.799854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 11:50:54.799922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 11:50:54.799983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 11:50:54.800087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 11:50:54.800147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 11:50:54.800215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 11:50:54.800270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 11:50:54.800319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:50:54.800403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 11:50:54.800468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 11:50:54.800530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 11:50:54.800589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 11:50:54.800741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 11:50:54.800851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1115 11:50:56.199015       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 11:50:55 no-preload-126380 kubelet[766]: I1115 11:50:55.366073     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64878ec8-f351-4aa1-b2a9-7a6b5c705fcd-lib-modules\") pod \"kube-proxy-zhsz4\" (UID: \"64878ec8-f351-4aa1-b2a9-7a6b5c705fcd\") " pod="kube-system/kube-proxy-zhsz4"
	Nov 15 11:50:55 no-preload-126380 kubelet[766]: I1115 11:50:55.425062     766 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 11:50:55 no-preload-126380 kubelet[766]: W1115 11:50:55.965054     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/crio-17abe705f192ee2c15fe65858d67eb95b00c38c12b9e6e6f1241be5b0a36ece3 WatchSource:0}: Error finding container 17abe705f192ee2c15fe65858d67eb95b00c38c12b9e6e6f1241be5b0a36ece3: Status 404 returned error can't find the container with id 17abe705f192ee2c15fe65858d67eb95b00c38c12b9e6e6f1241be5b0a36ece3
	Nov 15 11:50:59 no-preload-126380 kubelet[766]: I1115 11:50:59.533508     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/55ce0ad5-85a0-411a-9874-9b8c8e1b8595-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-t7kpg\" (UID: \"55ce0ad5-85a0-411a-9874-9b8c8e1b8595\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-t7kpg"
	Nov 15 11:50:59 no-preload-126380 kubelet[766]: I1115 11:50:59.534049     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lnbr\" (UniqueName: \"kubernetes.io/projected/bd9c4a81-fff7-4b1c-aa6d-921aca2695bd-kube-api-access-8lnbr\") pod \"dashboard-metrics-scraper-6ffb444bf9-9ngh7\" (UID: \"bd9c4a81-fff7-4b1c-aa6d-921aca2695bd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7"
	Nov 15 11:50:59 no-preload-126380 kubelet[766]: I1115 11:50:59.534156     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bd9c4a81-fff7-4b1c-aa6d-921aca2695bd-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-9ngh7\" (UID: \"bd9c4a81-fff7-4b1c-aa6d-921aca2695bd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7"
	Nov 15 11:50:59 no-preload-126380 kubelet[766]: I1115 11:50:59.534271     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm9dp\" (UniqueName: \"kubernetes.io/projected/55ce0ad5-85a0-411a-9874-9b8c8e1b8595-kube-api-access-lm9dp\") pod \"kubernetes-dashboard-855c9754f9-t7kpg\" (UID: \"55ce0ad5-85a0-411a-9874-9b8c8e1b8595\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-t7kpg"
	Nov 15 11:50:59 no-preload-126380 kubelet[766]: W1115 11:50:59.812971     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/crio-463d45d73c28d1be9e0aa733802f3de678d11a85fc3a1e8be9228af083d9db8f WatchSource:0}: Error finding container 463d45d73c28d1be9e0aa733802f3de678d11a85fc3a1e8be9228af083d9db8f: Status 404 returned error can't find the container with id 463d45d73c28d1be9e0aa733802f3de678d11a85fc3a1e8be9228af083d9db8f
	Nov 15 11:51:13 no-preload-126380 kubelet[766]: I1115 11:51:13.859426     766 scope.go:117] "RemoveContainer" containerID="98275348e9da976080fda2c5fb632e5a93e656574e63406cf24a86a80829d778"
	Nov 15 11:51:13 no-preload-126380 kubelet[766]: I1115 11:51:13.889643     766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-t7kpg" podStartSLOduration=7.742315534 podStartE2EDuration="14.888944815s" podCreationTimestamp="2025-11-15 11:50:59 +0000 UTC" firstStartedPulling="2025-11-15 11:50:59.818611789 +0000 UTC m=+14.809847686" lastFinishedPulling="2025-11-15 11:51:06.965241054 +0000 UTC m=+21.956476967" observedRunningTime="2025-11-15 11:51:07.869260125 +0000 UTC m=+22.860496022" watchObservedRunningTime="2025-11-15 11:51:13.888944815 +0000 UTC m=+28.880180712"
	Nov 15 11:51:14 no-preload-126380 kubelet[766]: I1115 11:51:14.863070     766 scope.go:117] "RemoveContainer" containerID="a4fcb529527c180e478a9215d20db8b24c3d5584a01637ee0ccbf7556136d806"
	Nov 15 11:51:14 no-preload-126380 kubelet[766]: E1115 11:51:14.863966     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9ngh7_kubernetes-dashboard(bd9c4a81-fff7-4b1c-aa6d-921aca2695bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7" podUID="bd9c4a81-fff7-4b1c-aa6d-921aca2695bd"
	Nov 15 11:51:14 no-preload-126380 kubelet[766]: I1115 11:51:14.864086     766 scope.go:117] "RemoveContainer" containerID="98275348e9da976080fda2c5fb632e5a93e656574e63406cf24a86a80829d778"
	Nov 15 11:51:19 no-preload-126380 kubelet[766]: I1115 11:51:19.765126     766 scope.go:117] "RemoveContainer" containerID="a4fcb529527c180e478a9215d20db8b24c3d5584a01637ee0ccbf7556136d806"
	Nov 15 11:51:19 no-preload-126380 kubelet[766]: E1115 11:51:19.765813     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9ngh7_kubernetes-dashboard(bd9c4a81-fff7-4b1c-aa6d-921aca2695bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7" podUID="bd9c4a81-fff7-4b1c-aa6d-921aca2695bd"
	Nov 15 11:51:26 no-preload-126380 kubelet[766]: I1115 11:51:26.895962     766 scope.go:117] "RemoveContainer" containerID="40537f2f9d73f24a3fc919e58c7be26902dcd9503e84419051fec20be5efa20d"
	Nov 15 11:51:32 no-preload-126380 kubelet[766]: I1115 11:51:32.460742     766 scope.go:117] "RemoveContainer" containerID="a4fcb529527c180e478a9215d20db8b24c3d5584a01637ee0ccbf7556136d806"
	Nov 15 11:51:32 no-preload-126380 kubelet[766]: I1115 11:51:32.923267     766 scope.go:117] "RemoveContainer" containerID="a4fcb529527c180e478a9215d20db8b24c3d5584a01637ee0ccbf7556136d806"
	Nov 15 11:51:33 no-preload-126380 kubelet[766]: I1115 11:51:33.926865     766 scope.go:117] "RemoveContainer" containerID="689c1c142f54cc6c23ec1e8d76b15c4c34360afb6cd644cb8eb6be69304ef50d"
	Nov 15 11:51:33 no-preload-126380 kubelet[766]: E1115 11:51:33.927018     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9ngh7_kubernetes-dashboard(bd9c4a81-fff7-4b1c-aa6d-921aca2695bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7" podUID="bd9c4a81-fff7-4b1c-aa6d-921aca2695bd"
	Nov 15 11:51:39 no-preload-126380 kubelet[766]: I1115 11:51:39.766991     766 scope.go:117] "RemoveContainer" containerID="689c1c142f54cc6c23ec1e8d76b15c4c34360afb6cd644cb8eb6be69304ef50d"
	Nov 15 11:51:39 no-preload-126380 kubelet[766]: E1115 11:51:39.768033     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9ngh7_kubernetes-dashboard(bd9c4a81-fff7-4b1c-aa6d-921aca2695bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7" podUID="bd9c4a81-fff7-4b1c-aa6d-921aca2695bd"
	Nov 15 11:51:42 no-preload-126380 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 11:51:42 no-preload-126380 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 11:51:42 no-preload-126380 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [6ba4ebfd5350b614116a1165ef7e8d2c6becd498c8a4d4af5dbdf487b9e37cb9] <==
	2025/11/15 11:51:07 Starting overwatch
	2025/11/15 11:51:07 Using namespace: kubernetes-dashboard
	2025/11/15 11:51:07 Using in-cluster config to connect to apiserver
	2025/11/15 11:51:07 Using secret token for csrf signing
	2025/11/15 11:51:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 11:51:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 11:51:07 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 11:51:07 Generating JWE encryption key
	2025/11/15 11:51:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 11:51:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 11:51:07 Initializing JWE encryption key from synchronized object
	2025/11/15 11:51:07 Creating in-cluster Sidecar client
	2025/11/15 11:51:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 11:51:07 Serving insecurely on HTTP port: 9090
	2025/11/15 11:51:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [40537f2f9d73f24a3fc919e58c7be26902dcd9503e84419051fec20be5efa20d] <==
	I1115 11:50:56.452631       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 11:51:26.455289       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e2a9e1acb1639da928e33bedd0d68edc4ebd14bd8de4a13663336f08668a6608] <==
	I1115 11:51:26.994516       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 11:51:27.021879       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 11:51:27.022048       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 11:51:27.026890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:30.482403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:34.743046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:38.341700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:41.396841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:44.422764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:44.437436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:51:44.437606       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 11:51:44.437999       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a1a6883d-ab3f-4fde-8358-8e509502c15b", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-126380_b39710d2-d292-4866-994b-dd49a09ee3ca became leader
	I1115 11:51:44.438034       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-126380_b39710d2-d292-4866-994b-dd49a09ee3ca!
	W1115 11:51:44.450339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:44.454129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:51:44.539108       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-126380_b39710d2-d292-4866-994b-dd49a09ee3ca!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-126380 -n no-preload-126380
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-126380 -n no-preload-126380: exit status 2 (381.412835ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-126380 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-126380
helpers_test.go:243: (dbg) docker inspect no-preload-126380:

-- stdout --
	[
	    {
	        "Id": "0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf",
	        "Created": "2025-11-15T11:49:07.318214347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 797181,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T11:50:36.587194536Z",
	            "FinishedAt": "2025-11-15T11:50:35.622894519Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/hosts",
	        "LogPath": "/var/lib/docker/containers/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf-json.log",
	        "Name": "/no-preload-126380",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-126380:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-126380",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf",
	                "LowerDir": "/var/lib/docker/overlay2/9848c74ea17203b8050bbe97a4da3abb8cf001cde7edd4cbb584ff0a4c7cd5e6-init/diff:/var/lib/docker/overlay2/f6a91dd271e1917c4f9a20566f427d8fc6680ca638407ee57e2d6675f62b946d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9848c74ea17203b8050bbe97a4da3abb8cf001cde7edd4cbb584ff0a4c7cd5e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9848c74ea17203b8050bbe97a4da3abb8cf001cde7edd4cbb584ff0a4c7cd5e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9848c74ea17203b8050bbe97a4da3abb8cf001cde7edd4cbb584ff0a4c7cd5e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-126380",
	                "Source": "/var/lib/docker/volumes/no-preload-126380/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-126380",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-126380",
	                "name.minikube.sigs.k8s.io": "no-preload-126380",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19dae6f2d10522c657b37740de12557e6daf7ba316e392a49e313aa6e27d8b69",
	            "SandboxKey": "/var/run/docker/netns/19dae6f2d105",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33838"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33837"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-126380": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:23:42:7f:f8:91",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1b9530ecfade28bc16fd6c10682aa7624f38192683bf3f788bebea9faf0c447",
	                    "EndpointID": "6bbe20faf637cc2aea4b1df5689ccc9d99a0dab10f79735066371fa56915d30e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-126380",
	                        "0b66713a6755"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-126380 -n no-preload-126380
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-126380 -n no-preload-126380: exit status 2 (406.289898ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-126380 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-126380 logs -n 25: (1.335608311s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-769461 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:48 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p default-k8s-diff-port-769461                                                                                                                                                                                                               │ default-k8s-diff-port-769461 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p disable-driver-mounts-200933                                                                                                                                                                                                               │ disable-driver-mounts-200933 │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ start   │ -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:50 UTC │
	│ image   │ embed-certs-404149 image list --format=json                                                                                                                                                                                                   │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ pause   │ -p embed-certs-404149 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │                     │
	│ delete  │ -p embed-certs-404149                                                                                                                                                                                                                         │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ delete  │ -p embed-certs-404149                                                                                                                                                                                                                         │ embed-certs-404149           │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:49 UTC │
	│ start   │ -p newest-cni-600818 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:49 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable metrics-server -p no-preload-126380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	│ stop    │ -p no-preload-126380 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable metrics-server -p newest-cni-600818 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	│ stop    │ -p newest-cni-600818 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable dashboard -p newest-cni-600818 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ start   │ -p newest-cni-600818 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ addons  │ enable dashboard -p no-preload-126380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ start   │ -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:51 UTC │
	│ image   │ newest-cni-600818 image list --format=json                                                                                                                                                                                                    │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │ 15 Nov 25 11:50 UTC │
	│ pause   │ -p newest-cni-600818 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:50 UTC │                     │
	│ delete  │ -p newest-cni-600818                                                                                                                                                                                                                          │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:51 UTC │ 15 Nov 25 11:51 UTC │
	│ delete  │ -p newest-cni-600818                                                                                                                                                                                                                          │ newest-cni-600818            │ jenkins │ v1.37.0 │ 15 Nov 25 11:51 UTC │ 15 Nov 25 11:51 UTC │
	│ start   │ -p auto-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-949287                  │ jenkins │ v1.37.0 │ 15 Nov 25 11:51 UTC │                     │
	│ image   │ no-preload-126380 image list --format=json                                                                                                                                                                                                    │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:51 UTC │ 15 Nov 25 11:51 UTC │
	│ pause   │ -p no-preload-126380 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-126380            │ jenkins │ v1.37.0 │ 15 Nov 25 11:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 11:51:02
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 11:51:02.979062  801259 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:51:02.979266  801259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:51:02.979293  801259 out.go:374] Setting ErrFile to fd 2...
	I1115 11:51:02.979311  801259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:51:02.979579  801259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:51:02.980027  801259 out.go:368] Setting JSON to false
	I1115 11:51:02.981074  801259 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12814,"bootTime":1763194649,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:51:02.981174  801259 start.go:143] virtualization:  
	I1115 11:51:02.985729  801259 out.go:179] * [auto-949287] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:51:02.989244  801259 notify.go:221] Checking for updates...
	I1115 11:51:02.993175  801259 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:51:02.999817  801259 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:51:03.004010  801259 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:51:03.007131  801259 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:51:03.010179  801259 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:51:03.013220  801259 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:51:03.016777  801259 config.go:182] Loaded profile config "no-preload-126380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:51:03.016919  801259 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:51:03.058209  801259 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:51:03.058376  801259 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:51:03.174939  801259 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-15 11:51:03.159786564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:51:03.175058  801259 docker.go:319] overlay module found
	I1115 11:51:03.178465  801259 out.go:179] * Using the docker driver based on user configuration
	I1115 11:51:03.181522  801259 start.go:309] selected driver: docker
	I1115 11:51:03.181547  801259 start.go:930] validating driver "docker" against <nil>
	I1115 11:51:03.181569  801259 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:51:03.182349  801259 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:51:03.269920  801259 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-15 11:51:03.260312178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:51:03.270089  801259 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 11:51:03.270326  801259 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 11:51:03.274333  801259 out.go:179] * Using Docker driver with root privileges
	I1115 11:51:03.277415  801259 cni.go:84] Creating CNI manager for ""
	I1115 11:51:03.277491  801259 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:51:03.277508  801259 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 11:51:03.277598  801259 start.go:353] cluster config:
	{Name:auto-949287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-949287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1115 11:51:03.280983  801259 out.go:179] * Starting "auto-949287" primary control-plane node in "auto-949287" cluster
	I1115 11:51:03.283966  801259 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 11:51:03.287023  801259 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 11:51:03.290028  801259 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:51:03.290084  801259 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 11:51:03.290094  801259 cache.go:65] Caching tarball of preloaded images
	I1115 11:51:03.290205  801259 preload.go:238] Found /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 11:51:03.290221  801259 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 11:51:03.290333  801259 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/config.json ...
	I1115 11:51:03.290357  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/config.json: {Name:mkccd1588c4b8b37ad192edf4ddc2068a4018ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:03.290500  801259 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 11:51:03.313818  801259 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 11:51:03.313839  801259 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 11:51:03.313851  801259 cache.go:243] Successfully downloaded all kic artifacts
	I1115 11:51:03.313878  801259 start.go:360] acquireMachinesLock for auto-949287: {Name:mkaf6ea366b01fa2d774c787f18844043a225252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 11:51:03.313981  801259 start.go:364] duration metric: took 86.286µs to acquireMachinesLock for "auto-949287"
	I1115 11:51:03.314006  801259 start.go:93] Provisioning new machine with config: &{Name:auto-949287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-949287 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:51:03.314069  801259 start.go:125] createHost starting for "" (driver="docker")
	W1115 11:51:01.841118  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	W1115 11:51:03.874023  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	I1115 11:51:03.318093  801259 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 11:51:03.318334  801259 start.go:159] libmachine.API.Create for "auto-949287" (driver="docker")
	I1115 11:51:03.318364  801259 client.go:173] LocalClient.Create starting
	I1115 11:51:03.318427  801259 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem
	I1115 11:51:03.318459  801259 main.go:143] libmachine: Decoding PEM data...
	I1115 11:51:03.318482  801259 main.go:143] libmachine: Parsing certificate...
	I1115 11:51:03.318537  801259 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem
	I1115 11:51:03.318559  801259 main.go:143] libmachine: Decoding PEM data...
	I1115 11:51:03.318571  801259 main.go:143] libmachine: Parsing certificate...
	I1115 11:51:03.318931  801259 cli_runner.go:164] Run: docker network inspect auto-949287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 11:51:03.343401  801259 cli_runner.go:211] docker network inspect auto-949287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 11:51:03.344940  801259 network_create.go:284] running [docker network inspect auto-949287] to gather additional debugging logs...
	I1115 11:51:03.344968  801259 cli_runner.go:164] Run: docker network inspect auto-949287
	W1115 11:51:03.369804  801259 cli_runner.go:211] docker network inspect auto-949287 returned with exit code 1
	I1115 11:51:03.369842  801259 network_create.go:287] error running [docker network inspect auto-949287]: docker network inspect auto-949287: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-949287 not found
	I1115 11:51:03.369855  801259 network_create.go:289] output of [docker network inspect auto-949287]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-949287 not found
	
	** /stderr **
	I1115 11:51:03.369979  801259 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:51:03.396331  801259 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-70b4341e5839 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:cf:e4:18:31:11} reservation:<nil>}
	I1115 11:51:03.396657  801259 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5353e0ad5224 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:f4:9a:df:ce:52} reservation:<nil>}
	I1115 11:51:03.397063  801259 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-cf2ab118f937 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:c9:22:19:21:27} reservation:<nil>}
	I1115 11:51:03.397494  801259 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1cbb0}
	I1115 11:51:03.397523  801259 network_create.go:124] attempt to create docker network auto-949287 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1115 11:51:03.397577  801259 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-949287 auto-949287
	I1115 11:51:03.471017  801259 network_create.go:108] docker network auto-949287 192.168.76.0/24 created
	I1115 11:51:03.471046  801259 kic.go:121] calculated static IP "192.168.76.2" for the "auto-949287" container
	I1115 11:51:03.471116  801259 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 11:51:03.487425  801259 cli_runner.go:164] Run: docker volume create auto-949287 --label name.minikube.sigs.k8s.io=auto-949287 --label created_by.minikube.sigs.k8s.io=true
	I1115 11:51:03.510277  801259 oci.go:103] Successfully created a docker volume auto-949287
	I1115 11:51:03.510371  801259 cli_runner.go:164] Run: docker run --rm --name auto-949287-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-949287 --entrypoint /usr/bin/test -v auto-949287:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 11:51:04.331304  801259 oci.go:107] Successfully prepared a docker volume auto-949287
	I1115 11:51:04.331376  801259 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:51:04.331387  801259 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 11:51:04.331459  801259 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-949287:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1115 11:51:06.342180  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	W1115 11:51:08.345562  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	W1115 11:51:10.844247  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	I1115 11:51:09.651098  801259 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-949287:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (5.319602637s)
	I1115 11:51:09.651131  801259 kic.go:203] duration metric: took 5.319740937s to extract preloaded images to volume ...
	W1115 11:51:09.651271  801259 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 11:51:09.651379  801259 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 11:51:09.770901  801259 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-949287 --name auto-949287 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-949287 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-949287 --network auto-949287 --ip 192.168.76.2 --volume auto-949287:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 11:51:10.287467  801259 cli_runner.go:164] Run: docker container inspect auto-949287 --format={{.State.Running}}
	I1115 11:51:10.314224  801259 cli_runner.go:164] Run: docker container inspect auto-949287 --format={{.State.Status}}
	I1115 11:51:10.334347  801259 cli_runner.go:164] Run: docker exec auto-949287 stat /var/lib/dpkg/alternatives/iptables
	I1115 11:51:10.391570  801259 oci.go:144] the created container "auto-949287" has a running status.
	I1115 11:51:10.391602  801259 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa...
	I1115 11:51:11.273884  801259 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 11:51:11.303012  801259 cli_runner.go:164] Run: docker container inspect auto-949287 --format={{.State.Status}}
	I1115 11:51:11.331174  801259 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 11:51:11.331200  801259 kic_runner.go:114] Args: [docker exec --privileged auto-949287 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 11:51:11.397421  801259 cli_runner.go:164] Run: docker container inspect auto-949287 --format={{.State.Status}}
	I1115 11:51:11.419515  801259 machine.go:94] provisionDockerMachine start ...
	I1115 11:51:11.419612  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:11.446772  801259 main.go:143] libmachine: Using SSH client type: native
	I1115 11:51:11.447112  801259 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1115 11:51:11.447129  801259 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 11:51:11.447844  801259 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1115 11:51:12.847029  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	W1115 11:51:15.347167  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	I1115 11:51:14.602442  801259 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-949287
	
	I1115 11:51:14.602466  801259 ubuntu.go:182] provisioning hostname "auto-949287"
	I1115 11:51:14.602552  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:14.620936  801259 main.go:143] libmachine: Using SSH client type: native
	I1115 11:51:14.621252  801259 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1115 11:51:14.621268  801259 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-949287 && echo "auto-949287" | sudo tee /etc/hostname
	I1115 11:51:14.790461  801259 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-949287
	
	I1115 11:51:14.790552  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:14.808936  801259 main.go:143] libmachine: Using SSH client type: native
	I1115 11:51:14.809245  801259 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1115 11:51:14.809295  801259 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-949287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-949287/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-949287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 11:51:14.974135  801259 main.go:143] libmachine: SSH cmd err, output: <nil>: 
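The provisioning steps above run each command over a plain SSH connection to the container's published port (127.0.0.1:33839) using the freshly generated id_rsa key and the "docker" user. As a rough, hypothetical sketch of that mechanism — not minikube's actual libmachine code, with the path, port and command hard-coded from this log purely for illustration — a minimal golang.org/x/crypto/ssh client running the same hostname check could look like:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and port are the ones shown in the log above; adjust for a real run.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node, host key not pinned
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33839", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.Output("hostname") // same probe as the "About to run SSH command: hostname" step
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote hostname: %s", out)
}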
	I1115 11:51:14.974163  801259 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21894-584713/.minikube CaCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21894-584713/.minikube}
	I1115 11:51:14.974185  801259 ubuntu.go:190] setting up certificates
	I1115 11:51:14.974194  801259 provision.go:84] configureAuth start
	I1115 11:51:14.974255  801259 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-949287
	I1115 11:51:14.991805  801259 provision.go:143] copyHostCerts
	I1115 11:51:14.991883  801259 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem, removing ...
	I1115 11:51:14.991893  801259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem
	I1115 11:51:14.991977  801259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/key.pem (1675 bytes)
	I1115 11:51:14.992075  801259 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem, removing ...
	I1115 11:51:14.992084  801259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem
	I1115 11:51:14.992109  801259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/ca.pem (1078 bytes)
	I1115 11:51:14.992167  801259 exec_runner.go:144] found /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem, removing ...
	I1115 11:51:14.992175  801259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem
	I1115 11:51:14.992199  801259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21894-584713/.minikube/cert.pem (1123 bytes)
	I1115 11:51:14.992250  801259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem org=jenkins.auto-949287 san=[127.0.0.1 192.168.76.2 auto-949287 localhost minikube]
	I1115 11:51:15.272110  801259 provision.go:177] copyRemoteCerts
	I1115 11:51:15.272186  801259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 11:51:15.272226  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:15.303839  801259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa Username:docker}
	I1115 11:51:15.417036  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1115 11:51:15.437526  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1115 11:51:15.457434  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 11:51:15.478460  801259 provision.go:87] duration metric: took 504.24088ms to configureAuth
	I1115 11:51:15.478529  801259 ubuntu.go:206] setting minikube options for container-runtime
	I1115 11:51:15.478748  801259 config.go:182] Loaded profile config "auto-949287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:51:15.478901  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:15.497776  801259 main.go:143] libmachine: Using SSH client type: native
	I1115 11:51:15.498104  801259 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1115 11:51:15.498124  801259 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 11:51:15.823255  801259 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 11:51:15.823277  801259 machine.go:97] duration metric: took 4.403735833s to provisionDockerMachine
	I1115 11:51:15.823287  801259 client.go:176] duration metric: took 12.504916951s to LocalClient.Create
	I1115 11:51:15.823344  801259 start.go:167] duration metric: took 12.504968774s to libmachine.API.Create "auto-949287"
	I1115 11:51:15.823353  801259 start.go:293] postStartSetup for "auto-949287" (driver="docker")
	I1115 11:51:15.823364  801259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 11:51:15.823468  801259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 11:51:15.823532  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:15.849752  801259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa Username:docker}
	I1115 11:51:15.957286  801259 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 11:51:15.961590  801259 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 11:51:15.961624  801259 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 11:51:15.961644  801259 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/addons for local assets ...
	I1115 11:51:15.961699  801259 filesync.go:126] Scanning /home/jenkins/minikube-integration/21894-584713/.minikube/files for local assets ...
	I1115 11:51:15.961805  801259 filesync.go:149] local asset: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem -> 5865612.pem in /etc/ssl/certs
	I1115 11:51:15.961907  801259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 11:51:15.969411  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:51:15.987589  801259 start.go:296] duration metric: took 164.219834ms for postStartSetup
	I1115 11:51:15.988024  801259 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-949287
	I1115 11:51:16.029188  801259 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/config.json ...
	I1115 11:51:16.029492  801259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:51:16.029542  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:16.048070  801259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa Username:docker}
	I1115 11:51:16.152062  801259 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 11:51:16.157396  801259 start.go:128] duration metric: took 12.843313266s to createHost
	I1115 11:51:16.157421  801259 start.go:83] releasing machines lock for "auto-949287", held for 12.843431979s
	I1115 11:51:16.157542  801259 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-949287
	I1115 11:51:16.174474  801259 ssh_runner.go:195] Run: cat /version.json
	I1115 11:51:16.174562  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:16.174596  801259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 11:51:16.174668  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:16.198062  801259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa Username:docker}
	I1115 11:51:16.208298  801259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa Username:docker}
	I1115 11:51:16.409942  801259 ssh_runner.go:195] Run: systemctl --version
	I1115 11:51:16.416671  801259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 11:51:16.456511  801259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 11:51:16.460569  801259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 11:51:16.460636  801259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 11:51:16.491244  801259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 11:51:16.491269  801259 start.go:496] detecting cgroup driver to use...
	I1115 11:51:16.491311  801259 detect.go:187] detected "cgroupfs" cgroup driver on host os
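The "cgroupfs" driver detected here goes hand in hand with the cgroups v1 host this job runs on (the kubeadm preflight output further down warns that cgroups v1 is in maintenance mode). One common way to make the unified-versus-legacy distinction in Go — an illustrative sketch only, not the logic in detect.go — is to test for /sys/fs/cgroup/cgroup.controllers, which exists only when the cgroup v2 unified hierarchy is mounted:

package main

import (
	"fmt"
	"os"
)

// cgroupVersion reports 2 if the unified (cgroup v2) hierarchy is mounted and 1
// otherwise. Purely illustrative; real driver detection may also consult systemd
// to decide between the "cgroupfs" and "systemd" cgroup drivers.
func cgroupVersion() int {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return 2
	}
	return 1
}

func main() {
	fmt.Println("cgroup version:", cgroupVersion())
}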
	I1115 11:51:16.491365  801259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 11:51:16.509735  801259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 11:51:16.522869  801259 docker.go:218] disabling cri-docker service (if available) ...
	I1115 11:51:16.522934  801259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 11:51:16.541802  801259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 11:51:16.562351  801259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 11:51:16.688193  801259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 11:51:16.821459  801259 docker.go:234] disabling docker service ...
	I1115 11:51:16.821529  801259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 11:51:16.845425  801259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 11:51:16.860410  801259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 11:51:16.986322  801259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 11:51:17.116107  801259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 11:51:17.136619  801259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 11:51:17.154495  801259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 11:51:17.154616  801259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:51:17.165537  801259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 11:51:17.165609  801259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:51:17.175214  801259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:51:17.184385  801259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:51:17.194010  801259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 11:51:17.202383  801259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:51:17.211241  801259 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:51:17.225169  801259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 11:51:17.234456  801259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 11:51:17.243697  801259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 11:51:17.252515  801259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:51:17.376992  801259 ssh_runner.go:195] Run: sudo systemctl restart crio
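Taken together, the edits above point CRI-O at the minikube pause image, switch it to the cgroupfs cgroup manager with a pod-scoped conmon cgroup, and open unprivileged ports via a default sysctl before the daemon is restarted. As a rough illustration (the section headers are where these keys usually live in crio.conf; the real drop-in contains more settings and is not copied from the node), the touched part of /etc/crio/crio.conf.d/02-crio.conf ends up looking approximately like:

# Illustrative reconstruction of the keys edited above; values taken from the sed commands.
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]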
	I1115 11:51:17.506190  801259 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 11:51:17.506258  801259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 11:51:17.510165  801259 start.go:564] Will wait 60s for crictl version
	I1115 11:51:17.510230  801259 ssh_runner.go:195] Run: which crictl
	I1115 11:51:17.514129  801259 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 11:51:17.540836  801259 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 11:51:17.540966  801259 ssh_runner.go:195] Run: crio --version
	I1115 11:51:17.570325  801259 ssh_runner.go:195] Run: crio --version
	I1115 11:51:17.615077  801259 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 11:51:17.617854  801259 cli_runner.go:164] Run: docker network inspect auto-949287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 11:51:17.641379  801259 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 11:51:17.645007  801259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
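The bash one-liner above is a replace-or-append edit of the guest's /etc/hosts so that host.minikube.internal resolves to the network gateway (192.168.76.1). A hypothetical Go equivalent of that edit — the function name is invented, and the real flow writes the new file under /tmp and copies it into place with sudo as shown in the log — might be:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so that exactly one line maps name to ip,
// mirroring the grep-and-append pipeline in the log line above. Illustrative only:
// the real flow builds the file under /tmp and copies it with sudo, because
// /etc/hosts inside a container is usually a bind mount.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}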
	I1115 11:51:17.655328  801259 kubeadm.go:884] updating cluster {Name:auto-949287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-949287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 11:51:17.655442  801259 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 11:51:17.655496  801259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:51:17.690532  801259 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:51:17.690555  801259 crio.go:433] Images already preloaded, skipping extraction
	I1115 11:51:17.690610  801259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 11:51:17.714904  801259 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 11:51:17.714928  801259 cache_images.go:86] Images are preloaded, skipping loading
	I1115 11:51:17.714935  801259 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 11:51:17.715024  801259 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-949287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-949287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 11:51:17.715107  801259 ssh_runner.go:195] Run: crio config
	I1115 11:51:17.789061  801259 cni.go:84] Creating CNI manager for ""
	I1115 11:51:17.789083  801259 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:51:17.789120  801259 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 11:51:17.789157  801259 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-949287 NodeName:auto-949287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 11:51:17.789343  801259 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-949287"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 11:51:17.789432  801259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 11:51:17.798201  801259 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 11:51:17.798291  801259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 11:51:17.805777  801259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1115 11:51:17.817831  801259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 11:51:17.831255  801259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
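The kubeadm config generated above (written to the node as kubeadm.yaml.new here) is one multi-document YAML file bundling four objects: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A small, hypothetical sketch of inspecting such a bundle — not part of minikube, and with the file path invented for the example — walks the documents with a gopkg.in/yaml.v3 decoder and prints each apiVersion/kind pair:

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// header is the minimal shape every document in the bundle shares.
type header struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("kubeadm.yaml") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var h header
		if err := dec.Decode(&h); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents in the bundle
			}
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", h.APIVersion, h.Kind)
	}
}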
	I1115 11:51:17.846544  801259 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 11:51:17.849988  801259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 11:51:17.860241  801259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:51:17.981343  801259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:51:18.000070  801259 certs.go:69] Setting up /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287 for IP: 192.168.76.2
	I1115 11:51:18.000137  801259 certs.go:195] generating shared ca certs ...
	I1115 11:51:18.000170  801259 certs.go:227] acquiring lock for ca certs: {Name:mk9e3fa3258bc24a66b5afdd3db0cf2d6342d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:18.000329  801259 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key
	I1115 11:51:18.000420  801259 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key
	I1115 11:51:18.000445  801259 certs.go:257] generating profile certs ...
	I1115 11:51:18.000548  801259 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.key
	I1115 11:51:18.000586  801259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt with IP's: []
	I1115 11:51:19.151728  801259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt ...
	I1115 11:51:19.151803  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: {Name:mk1f664ba8774865b126ed1b0ba345def09c92d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:19.152024  801259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.key ...
	I1115 11:51:19.152062  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.key: {Name:mk784dfc70b94f5b7384eca3e8931e0910ae6b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:19.152184  801259 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.key.1493e8ca
	I1115 11:51:19.152225  801259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.crt.1493e8ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1115 11:51:20.180125  801259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.crt.1493e8ca ...
	I1115 11:51:20.180157  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.crt.1493e8ca: {Name:mkdd6d832edf6e47302d8e99273580a970badefa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:20.180342  801259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.key.1493e8ca ...
	I1115 11:51:20.180357  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.key.1493e8ca: {Name:mk8ef8bf7a18bf3ddee5327e29765033ac0529ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:20.180442  801259 certs.go:382] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.crt.1493e8ca -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.crt
	I1115 11:51:20.180527  801259 certs.go:386] copying /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.key.1493e8ca -> /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.key
	I1115 11:51:20.180589  801259 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.key
	I1115 11:51:20.180608  801259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.crt with IP's: []
	I1115 11:51:20.549922  801259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.crt ...
	I1115 11:51:20.549952  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.crt: {Name:mk501a560abbfaf19f19afcafd487e734f456053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:20.550133  801259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.key ...
	I1115 11:51:20.550148  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.key: {Name:mk00d3b792809c0072834411473de68993e6c82f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:20.550331  801259 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem (1338 bytes)
	W1115 11:51:20.550377  801259 certs.go:480] ignoring /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561_empty.pem, impossibly tiny 0 bytes
	I1115 11:51:20.550391  801259 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 11:51:20.550415  801259 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/ca.pem (1078 bytes)
	I1115 11:51:20.550451  801259 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/cert.pem (1123 bytes)
	I1115 11:51:20.550478  801259 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/certs/key.pem (1675 bytes)
	I1115 11:51:20.550523  801259 certs.go:484] found cert: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem (1708 bytes)
	I1115 11:51:20.551159  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 11:51:20.572209  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 11:51:20.598554  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 11:51:20.616463  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 11:51:20.635216  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1115 11:51:20.652841  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 11:51:20.670249  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 11:51:20.688395  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 11:51:20.707525  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/ssl/certs/5865612.pem --> /usr/share/ca-certificates/5865612.pem (1708 bytes)
	I1115 11:51:20.724548  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 11:51:20.742445  801259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21894-584713/.minikube/certs/586561.pem --> /usr/share/ca-certificates/586561.pem (1338 bytes)
	I1115 11:51:20.760474  801259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 11:51:20.773396  801259 ssh_runner.go:195] Run: openssl version
	I1115 11:51:20.781747  801259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5865612.pem && ln -fs /usr/share/ca-certificates/5865612.pem /etc/ssl/certs/5865612.pem"
	I1115 11:51:20.790864  801259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5865612.pem
	I1115 11:51:20.794683  801259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 10:38 /usr/share/ca-certificates/5865612.pem
	I1115 11:51:20.794749  801259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5865612.pem
	I1115 11:51:20.835450  801259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5865612.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 11:51:20.847339  801259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 11:51:20.855882  801259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:51:20.859680  801259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:51:20.859746  801259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 11:51:20.902118  801259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 11:51:20.910582  801259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586561.pem && ln -fs /usr/share/ca-certificates/586561.pem /etc/ssl/certs/586561.pem"
	I1115 11:51:20.919139  801259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586561.pem
	I1115 11:51:20.923481  801259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 10:38 /usr/share/ca-certificates/586561.pem
	I1115 11:51:20.923594  801259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586561.pem
	I1115 11:51:20.968652  801259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586561.pem /etc/ssl/certs/51391683.0"
	I1115 11:51:20.977341  801259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 11:51:20.982124  801259 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 11:51:20.982224  801259 kubeadm.go:401] StartCluster: {Name:auto-949287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-949287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 11:51:20.982327  801259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 11:51:20.982412  801259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 11:51:21.014995  801259 cri.go:89] found id: ""
	I1115 11:51:21.015117  801259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 11:51:21.023430  801259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 11:51:21.031622  801259 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 11:51:21.031687  801259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 11:51:21.039490  801259 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 11:51:21.039506  801259 kubeadm.go:158] found existing configuration files:
	
	I1115 11:51:21.039555  801259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 11:51:21.047481  801259 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 11:51:21.047566  801259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 11:51:21.054894  801259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 11:51:21.062451  801259 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 11:51:21.062518  801259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 11:51:21.069920  801259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 11:51:21.077751  801259 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 11:51:21.077819  801259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 11:51:21.085479  801259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 11:51:21.093196  801259 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 11:51:21.093302  801259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 11:51:21.100797  801259 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 11:51:21.151317  801259 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 11:51:21.151715  801259 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 11:51:21.178162  801259 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 11:51:21.178241  801259 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 11:51:21.178300  801259 kubeadm.go:319] OS: Linux
	I1115 11:51:21.178353  801259 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 11:51:21.178407  801259 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 11:51:21.178460  801259 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 11:51:21.178514  801259 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 11:51:21.178569  801259 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 11:51:21.178624  801259 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 11:51:21.178676  801259 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 11:51:21.178729  801259 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 11:51:21.178780  801259 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 11:51:21.253794  801259 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 11:51:21.253925  801259 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 11:51:21.254064  801259 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 11:51:21.262228  801259 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1115 11:51:17.841089  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	W1115 11:51:19.842109  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	I1115 11:51:21.268334  801259 out.go:252]   - Generating certificates and keys ...
	I1115 11:51:21.268489  801259 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 11:51:21.268612  801259 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 11:51:21.594908  801259 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 11:51:22.246541  801259 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 11:51:22.793108  801259 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	W1115 11:51:22.341764  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	W1115 11:51:24.341903  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	I1115 11:51:23.084587  801259 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 11:51:23.577940  801259 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 11:51:23.578843  801259 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-949287 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 11:51:24.290468  801259 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 11:51:24.290832  801259 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-949287 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 11:51:24.524711  801259 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 11:51:24.830976  801259 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 11:51:25.150762  801259 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 11:51:25.151413  801259 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 11:51:25.225622  801259 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 11:51:25.547932  801259 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 11:51:26.329008  801259 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 11:51:27.032040  801259 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 11:51:27.348875  801259 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 11:51:27.349540  801259 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 11:51:27.352587  801259 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 11:51:27.356171  801259 out.go:252]   - Booting up control plane ...
	I1115 11:51:27.356314  801259 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 11:51:27.356413  801259 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 11:51:27.357907  801259 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 11:51:27.374665  801259 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 11:51:27.375300  801259 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 11:51:27.383937  801259 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 11:51:27.384700  801259 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 11:51:27.385154  801259 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 11:51:27.528631  801259 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 11:51:27.528760  801259 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1115 11:51:26.342472  797007 pod_ready.go:104] pod "coredns-66bc5c9577-m2hwn" is not "Ready", error: <nil>
	I1115 11:51:27.844136  797007 pod_ready.go:94] pod "coredns-66bc5c9577-m2hwn" is "Ready"
	I1115 11:51:27.844171  797007 pod_ready.go:86] duration metric: took 30.508758489s for pod "coredns-66bc5c9577-m2hwn" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:27.855148  797007 pod_ready.go:83] waiting for pod "etcd-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:27.861128  797007 pod_ready.go:94] pod "etcd-no-preload-126380" is "Ready"
	I1115 11:51:27.861161  797007 pod_ready.go:86] duration metric: took 5.981953ms for pod "etcd-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:27.864017  797007 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:27.870043  797007 pod_ready.go:94] pod "kube-apiserver-no-preload-126380" is "Ready"
	I1115 11:51:27.870075  797007 pod_ready.go:86] duration metric: took 6.026286ms for pod "kube-apiserver-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:27.872965  797007 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:28.040056  797007 pod_ready.go:94] pod "kube-controller-manager-no-preload-126380" is "Ready"
	I1115 11:51:28.040133  797007 pod_ready.go:86] duration metric: took 167.140432ms for pod "kube-controller-manager-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:28.239325  797007 pod_ready.go:83] waiting for pod "kube-proxy-zhsz4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:28.638475  797007 pod_ready.go:94] pod "kube-proxy-zhsz4" is "Ready"
	I1115 11:51:28.638499  797007 pod_ready.go:86] duration metric: took 399.151088ms for pod "kube-proxy-zhsz4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:28.839110  797007 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:29.238969  797007 pod_ready.go:94] pod "kube-scheduler-no-preload-126380" is "Ready"
	I1115 11:51:29.238994  797007 pod_ready.go:86] duration metric: took 399.860133ms for pod "kube-scheduler-no-preload-126380" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 11:51:29.239007  797007 pod_ready.go:40] duration metric: took 31.907122831s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 11:51:29.346988  797007 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 11:51:29.350207  797007 out.go:179] * Done! kubectl is now configured to use "no-preload-126380" cluster and "default" namespace by default
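The interleaved 797007 lines are a second profile (no-preload-126380) finishing its start-up: pod_ready.go polls each kube-system control-plane pod until its Ready condition is true (or the pod is gone), and only then does minikube print the Done! message above. A stripped-down, hypothetical version of such a readiness poll with client-go — the timeout and error handling below are invented for the illustration, and this is not minikube's pod_ready.go — could look like:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition on the pod is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	podName := "coredns-66bc5c9577-m2hwn" // name taken from the log above
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, podName, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pod %q is Ready\n", podName)
}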
	I1115 11:51:28.032819  801259 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.825918ms
	I1115 11:51:28.032964  801259 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 11:51:28.033051  801259 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1115 11:51:28.033145  801259 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 11:51:28.033228  801259 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 11:51:32.007078  801259 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.974269283s
	I1115 11:51:33.380714  801259 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.348144075s
	I1115 11:51:35.537990  801259 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.505413885s
	I1115 11:51:35.570034  801259 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 11:51:35.591353  801259 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 11:51:35.605304  801259 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 11:51:35.605527  801259 kubeadm.go:319] [mark-control-plane] Marking the node auto-949287 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 11:51:35.620787  801259 kubeadm.go:319] [bootstrap-token] Using token: mjed6u.rv12ltnow4014422
	I1115 11:51:35.623854  801259 out.go:252]   - Configuring RBAC rules ...
	I1115 11:51:35.623985  801259 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 11:51:35.628902  801259 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 11:51:35.637888  801259 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 11:51:35.642338  801259 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 11:51:35.650279  801259 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 11:51:35.654369  801259 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 11:51:35.945378  801259 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 11:51:36.386723  801259 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 11:51:36.945683  801259 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 11:51:36.947231  801259 kubeadm.go:319] 
	I1115 11:51:36.947315  801259 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 11:51:36.947322  801259 kubeadm.go:319] 
	I1115 11:51:36.947404  801259 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 11:51:36.947409  801259 kubeadm.go:319] 
	I1115 11:51:36.947440  801259 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 11:51:36.947502  801259 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 11:51:36.947571  801259 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 11:51:36.947577  801259 kubeadm.go:319] 
	I1115 11:51:36.947634  801259 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 11:51:36.947638  801259 kubeadm.go:319] 
	I1115 11:51:36.947688  801259 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 11:51:36.947692  801259 kubeadm.go:319] 
	I1115 11:51:36.947747  801259 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 11:51:36.947825  801259 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 11:51:36.947898  801259 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 11:51:36.947902  801259 kubeadm.go:319] 
	I1115 11:51:36.947991  801259 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 11:51:36.948079  801259 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 11:51:36.948085  801259 kubeadm.go:319] 
	I1115 11:51:36.948172  801259 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mjed6u.rv12ltnow4014422 \
	I1115 11:51:36.948280  801259 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a \
	I1115 11:51:36.948306  801259 kubeadm.go:319] 	--control-plane 
	I1115 11:51:36.948311  801259 kubeadm.go:319] 
	I1115 11:51:36.948399  801259 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 11:51:36.948404  801259 kubeadm.go:319] 
	I1115 11:51:36.948489  801259 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mjed6u.rv12ltnow4014422 \
	I1115 11:51:36.948595  801259 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b1040393eada86fab96747f4b429d894d6129adf107451497f2ee55617b5a54a 
	I1115 11:51:36.951219  801259 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 11:51:36.951468  801259 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 11:51:36.951576  801259 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 11:51:36.951598  801259 cni.go:84] Creating CNI manager for ""
	I1115 11:51:36.951607  801259 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 11:51:36.954770  801259 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 11:51:36.957782  801259 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 11:51:36.962191  801259 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 11:51:36.962212  801259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 11:51:36.977199  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 11:51:37.742225  801259 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 11:51:37.742310  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:51:37.742354  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-949287 minikube.k8s.io/updated_at=2025_11_15T11_51_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510 minikube.k8s.io/name=auto-949287 minikube.k8s.io/primary=true
	I1115 11:51:37.905919  801259 ops.go:34] apiserver oom_adj: -16
	I1115 11:51:37.906018  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:51:38.407000  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:51:38.906749  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:51:39.406161  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:51:39.906140  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:51:40.407105  801259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 11:51:40.496608  801259 kubeadm.go:1114] duration metric: took 2.754346298s to wait for elevateKubeSystemPrivileges
	I1115 11:51:40.496639  801259 kubeadm.go:403] duration metric: took 19.51441832s to StartCluster
	I1115 11:51:40.496657  801259 settings.go:142] acquiring lock: {Name:mk2d09436a5408529b747823bfcf9abd7a6d5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:40.496718  801259 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:51:40.497735  801259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/kubeconfig: {Name:mk426361e0314a25df9b90afd11fa4f0173f5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 11:51:40.497989  801259 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 11:51:40.498085  801259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 11:51:40.498356  801259 config.go:182] Loaded profile config "auto-949287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:51:40.498358  801259 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 11:51:40.498439  801259 addons.go:70] Setting storage-provisioner=true in profile "auto-949287"
	I1115 11:51:40.498456  801259 addons.go:239] Setting addon storage-provisioner=true in "auto-949287"
	I1115 11:51:40.498481  801259 host.go:66] Checking if "auto-949287" exists ...
	I1115 11:51:40.498994  801259 cli_runner.go:164] Run: docker container inspect auto-949287 --format={{.State.Status}}
	I1115 11:51:40.499205  801259 addons.go:70] Setting default-storageclass=true in profile "auto-949287"
	I1115 11:51:40.499223  801259 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-949287"
	I1115 11:51:40.499509  801259 cli_runner.go:164] Run: docker container inspect auto-949287 --format={{.State.Status}}
	I1115 11:51:40.501043  801259 out.go:179] * Verifying Kubernetes components...
	I1115 11:51:40.504113  801259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 11:51:40.553456  801259 addons.go:239] Setting addon default-storageclass=true in "auto-949287"
	I1115 11:51:40.553508  801259 host.go:66] Checking if "auto-949287" exists ...
	I1115 11:51:40.553959  801259 cli_runner.go:164] Run: docker container inspect auto-949287 --format={{.State.Status}}
	I1115 11:51:40.556161  801259 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 11:51:40.559067  801259 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:51:40.559092  801259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 11:51:40.559159  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:40.612064  801259 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 11:51:40.612093  801259 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 11:51:40.612155  801259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-949287
	I1115 11:51:40.634593  801259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa Username:docker}
	I1115 11:51:40.646511  801259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/auto-949287/id_rsa Username:docker}
	I1115 11:51:40.736688  801259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 11:51:40.807912  801259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 11:51:40.906592  801259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 11:51:40.925836  801259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 11:51:41.425007  801259 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1115 11:51:41.426751  801259 node_ready.go:35] waiting up to 15m0s for node "auto-949287" to be "Ready" ...
	I1115 11:51:41.949495  801259 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-949287" context rescaled to 1 replicas
	I1115 11:51:42.290051  801259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.364181491s)
	I1115 11:51:42.293086  801259 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1115 11:51:42.296944  801259 addons.go:515] duration metric: took 1.798575528s for enable addons: enabled=[default-storageclass storage-provisioner]
	
	
	==> CRI-O <==
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.464415866Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.484292961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.485049555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.512774338Z" level=info msg="Created container 689c1c142f54cc6c23ec1e8d76b15c4c34360afb6cd644cb8eb6be69304ef50d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7/dashboard-metrics-scraper" id=56409253-1b97-4f41-a48c-73afe748ec3e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.518809961Z" level=info msg="Starting container: 689c1c142f54cc6c23ec1e8d76b15c4c34360afb6cd644cb8eb6be69304ef50d" id=b1117351-7ebd-439b-a4d6-b92a11b120eb name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.5268107Z" level=info msg="Started container" PID=1656 containerID=689c1c142f54cc6c23ec1e8d76b15c4c34360afb6cd644cb8eb6be69304ef50d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7/dashboard-metrics-scraper id=b1117351-7ebd-439b-a4d6-b92a11b120eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=4064a12a3d4e231271e3c21ef53485a3f153a661341ea7a15c9a287583b6e122
	Nov 15 11:51:32 no-preload-126380 conmon[1654]: conmon 689c1c142f54cc6c23ec <ninfo>: container 1656 exited with status 1
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.924759949Z" level=info msg="Removing container: a4fcb529527c180e478a9215d20db8b24c3d5584a01637ee0ccbf7556136d806" id=783ff8bf-7912-464a-8f0e-c3bfd00d55ec name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.932641524Z" level=info msg="Error loading conmon cgroup of container a4fcb529527c180e478a9215d20db8b24c3d5584a01637ee0ccbf7556136d806: cgroup deleted" id=783ff8bf-7912-464a-8f0e-c3bfd00d55ec name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:51:32 no-preload-126380 crio[650]: time="2025-11-15T11:51:32.93593023Z" level=info msg="Removed container a4fcb529527c180e478a9215d20db8b24c3d5584a01637ee0ccbf7556136d806: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7/dashboard-metrics-scraper" id=783ff8bf-7912-464a-8f0e-c3bfd00d55ec name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.552212077Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.558115818Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.558157385Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.55818365Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.565163176Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.565193175Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.565209905Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.571373899Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.571408968Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.571430753Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.57618889Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.576353519Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.576433668Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.581685554Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 11:51:36 no-preload-126380 crio[650]: time="2025-11-15T11:51:36.581817887Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	689c1c142f54c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago       Exited              dashboard-metrics-scraper   2                   4064a12a3d4e2       dashboard-metrics-scraper-6ffb444bf9-9ngh7   kubernetes-dashboard
	e2a9e1acb1639       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           20 seconds ago       Running             storage-provisioner         2                   4be3faa1d3106       storage-provisioner                          kube-system
	6ba4ebfd5350b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   463d45d73c28d       kubernetes-dashboard-855c9754f9-t7kpg        kubernetes-dashboard
	27fc9b3c51b1f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   17abe705f192e       busybox                                      default
	b44ac911fc88c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago       Running             coredns                     1                   81ffe0d3a7caf       coredns-66bc5c9577-m2hwn                     kube-system
	40537f2f9d73f       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           51 seconds ago       Exited              storage-provisioner         1                   4be3faa1d3106       storage-provisioner                          kube-system
	1b01f6a4fe6ad       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago       Running             kindnet-cni                 1                   d72ab97f23e94       kindnet-7vrr2                                kube-system
	e54819dbda557       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago       Running             kube-proxy                  1                   c8a0649166f49       kube-proxy-zhsz4                             kube-system
	ff27b73ca8f17       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   5cea28a31c0e0       kube-scheduler-no-preload-126380             kube-system
	16ac7fdb8e9ed       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   4bfd70ad1fd14       kube-apiserver-no-preload-126380             kube-system
	ab769dc54851c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   5f606be6e2ae9       etcd-no-preload-126380                       kube-system
	57c368e28f36e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   82a80ea3b5900       kube-controller-manager-no-preload-126380    kube-system
	
	
	==> coredns [b44ac911fc88c96498566ce772c1348e18d74da236c4b629cf05e9fa0d9d4ebe] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51407 - 33488 "HINFO IN 6593363120524801419.3718986220948722277. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011901137s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-126380
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-126380
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=no-preload-126380
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T11_49_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 11:49:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-126380
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 11:51:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 11:51:35 +0000   Sat, 15 Nov 2025 11:49:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 11:51:35 +0000   Sat, 15 Nov 2025 11:49:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 11:51:35 +0000   Sat, 15 Nov 2025 11:49:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 11:51:35 +0000   Sat, 15 Nov 2025 11:50:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-126380
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                a22ae12e-ce80-4a2c-98ad-3a3e8aeb26aa
	  Boot ID:                    d5499758-03fb-4131-870a-6c50901d5286
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-m2hwn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-no-preload-126380                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-7vrr2                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-126380              250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-126380     200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-zhsz4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-126380              100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9ngh7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-t7kpg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 111s                 kube-proxy       
	  Normal   Starting                 50s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node no-preload-126380 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node no-preload-126380 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node no-preload-126380 status is now: NodeHasSufficientPID
	  Normal   Starting                 119s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 119s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    118s                 kubelet          Node no-preload-126380 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     118s                 kubelet          Node no-preload-126380 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  118s                 kubelet          Node no-preload-126380 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           115s                 node-controller  Node no-preload-126380 event: Registered Node no-preload-126380 in Controller
	  Normal   NodeReady                98s                  kubelet          Node no-preload-126380 status is now: NodeReady
	  Normal   Starting                 62s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)    kubelet          Node no-preload-126380 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)    kubelet          Node no-preload-126380 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)    kubelet          Node no-preload-126380 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                  node-controller  Node no-preload-126380 event: Registered Node no-preload-126380 in Controller
	
	
	==> dmesg <==
	[Nov15 11:29] overlayfs: idmapped layers are currently not supported
	[ +18.078832] overlayfs: idmapped layers are currently not supported
	[Nov15 11:30] overlayfs: idmapped layers are currently not supported
	[ +27.078852] overlayfs: idmapped layers are currently not supported
	[Nov15 11:32] overlayfs: idmapped layers are currently not supported
	[Nov15 11:33] overlayfs: idmapped layers are currently not supported
	[Nov15 11:35] overlayfs: idmapped layers are currently not supported
	[Nov15 11:36] overlayfs: idmapped layers are currently not supported
	[Nov15 11:37] overlayfs: idmapped layers are currently not supported
	[Nov15 11:39] overlayfs: idmapped layers are currently not supported
	[Nov15 11:41] overlayfs: idmapped layers are currently not supported
	[Nov15 11:42] overlayfs: idmapped layers are currently not supported
	[ +38.149986] overlayfs: idmapped layers are currently not supported
	[Nov15 11:43] overlayfs: idmapped layers are currently not supported
	[ +42.515815] overlayfs: idmapped layers are currently not supported
	[Nov15 11:44] overlayfs: idmapped layers are currently not supported
	[Nov15 11:46] overlayfs: idmapped layers are currently not supported
	[Nov15 11:47] overlayfs: idmapped layers are currently not supported
	[ +42.475391] overlayfs: idmapped layers are currently not supported
	[Nov15 11:48] overlayfs: idmapped layers are currently not supported
	[Nov15 11:49] overlayfs: idmapped layers are currently not supported
	[Nov15 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.578289] overlayfs: idmapped layers are currently not supported
	[  +6.063974] overlayfs: idmapped layers are currently not supported
	[Nov15 11:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ab769dc54851c40c74b065b75a3f67d4f8d0132a1f1e065c9daa886d8665fdc7] <==
	{"level":"warn","ts":"2025-11-15T11:50:52.538028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.574982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.618930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.657934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.678686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.713477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.747669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.773669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.805492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.840069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.866064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.890424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.906505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.925584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.939354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:52.956942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.001392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.045345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.059048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.083916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.100080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.125344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.147530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.157651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T11:50:53.223352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40672","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:51:47 up  3:34,  0 user,  load average: 4.38, 3.78, 3.10
	Linux no-preload-126380 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b01f6a4fe6ad8a2d4e70a06ee23f3e1ea000ca7c3a2d3c66dd46c7a32a460a4] <==
	I1115 11:50:56.357047       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 11:50:56.357312       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 11:50:56.357440       1 main.go:148] setting mtu 1500 for CNI 
	I1115 11:50:56.357452       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 11:50:56.357462       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T11:50:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 11:50:56.551019       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 11:50:56.552210       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 11:50:56.552244       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 11:50:56.552641       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 11:51:26.552037       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 11:51:26.552679       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 11:51:26.553947       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 11:51:26.601820       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1115 11:51:28.152486       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 11:51:28.152520       1 metrics.go:72] Registering metrics
	I1115 11:51:28.154010       1 controller.go:711] "Syncing nftables rules"
	I1115 11:51:36.551168       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:51:36.551956       1 main.go:301] handling current node
	I1115 11:51:46.553015       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 11:51:46.553049       1 main.go:301] handling current node
	
	
	==> kube-apiserver [16ac7fdb8e9ed235613c8255c801b9a65efe815d89103579d0f55fa48408628f] <==
	I1115 11:50:54.794148       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 11:50:54.821538       1 aggregator.go:171] initial CRD sync complete...
	I1115 11:50:54.829166       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 11:50:54.829174       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 11:50:54.829181       1 cache.go:39] Caches are synced for autoregister controller
	I1115 11:50:54.837594       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 11:50:54.842356       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1115 11:50:54.842368       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 11:50:54.853335       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 11:50:54.861437       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 11:50:54.879306       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 11:50:54.879331       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 11:50:54.879845       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 11:50:54.889486       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 11:50:55.101733       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 11:50:55.367618       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 11:50:56.154304       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 11:50:56.503026       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 11:50:56.628054       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 11:50:56.658290       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 11:50:57.004192       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.64.168"}
	I1115 11:50:57.084512       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.242.39"}
	I1115 11:50:58.978654       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 11:50:59.230694       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 11:50:59.294898       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [57c368e28f36eee195d648e761727c0670d2cfaa223fa5be99062e847379937c] <==
	I1115 11:50:58.822413       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 11:50:58.822796       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 11:50:58.823097       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:50:58.840553       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 11:50:58.842502       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 11:50:58.844899       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 11:50:58.851139       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 11:50:58.857386       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 11:50:58.868396       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 11:50:58.871109       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 11:50:58.871300       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 11:50:58.871342       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 11:50:58.876538       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 11:50:58.882654       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 11:50:58.882867       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 11:50:58.882978       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-126380"
	I1115 11:50:58.883052       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 11:50:58.883210       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 11:50:58.883248       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 11:50:58.883276       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 11:50:58.884155       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:50:58.884311       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 11:50:58.884327       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 11:50:58.890168       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 11:50:58.895195       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [e54819dbda5570f90b23e33d6f1b1635479dd9063dc6c9be60485bc7fd5e933c] <==
	I1115 11:50:56.570667       1 server_linux.go:53] "Using iptables proxy"
	I1115 11:50:56.870549       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 11:50:57.023278       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 11:50:57.023768       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 11:50:57.023863       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 11:50:57.065646       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 11:50:57.065768       1 server_linux.go:132] "Using iptables Proxier"
	I1115 11:50:57.081459       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 11:50:57.081854       1 server.go:527] "Version info" version="v1.34.1"
	I1115 11:50:57.082398       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:50:57.083757       1 config.go:200] "Starting service config controller"
	I1115 11:50:57.083872       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 11:50:57.083948       1 config.go:106] "Starting endpoint slice config controller"
	I1115 11:50:57.083982       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 11:50:57.084020       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 11:50:57.084045       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 11:50:57.084816       1 config.go:309] "Starting node config controller"
	I1115 11:50:57.084878       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 11:50:57.084910       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 11:50:57.184991       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 11:50:57.185039       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 11:50:57.185076       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ff27b73ca8f1765a9b5e411c5c5a50ecdc283b3f9ac1d25c020e18cc04187039] <==
	I1115 11:50:54.679004       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 11:50:54.698453       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 11:50:54.698571       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:50:54.698589       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 11:50:54.698605       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1115 11:50:54.732060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 11:50:54.786526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 11:50:54.786627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 11:50:54.799649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 11:50:54.799770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 11:50:54.799854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 11:50:54.799922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 11:50:54.799983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 11:50:54.800087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 11:50:54.800147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 11:50:54.800215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 11:50:54.800270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 11:50:54.800319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 11:50:54.800403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 11:50:54.800468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 11:50:54.800530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 11:50:54.800589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 11:50:54.800741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 11:50:54.800851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1115 11:50:56.199015       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 11:50:55 no-preload-126380 kubelet[766]: I1115 11:50:55.366073     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64878ec8-f351-4aa1-b2a9-7a6b5c705fcd-lib-modules\") pod \"kube-proxy-zhsz4\" (UID: \"64878ec8-f351-4aa1-b2a9-7a6b5c705fcd\") " pod="kube-system/kube-proxy-zhsz4"
	Nov 15 11:50:55 no-preload-126380 kubelet[766]: I1115 11:50:55.425062     766 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 11:50:55 no-preload-126380 kubelet[766]: W1115 11:50:55.965054     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/crio-17abe705f192ee2c15fe65858d67eb95b00c38c12b9e6e6f1241be5b0a36ece3 WatchSource:0}: Error finding container 17abe705f192ee2c15fe65858d67eb95b00c38c12b9e6e6f1241be5b0a36ece3: Status 404 returned error can't find the container with id 17abe705f192ee2c15fe65858d67eb95b00c38c12b9e6e6f1241be5b0a36ece3
	Nov 15 11:50:59 no-preload-126380 kubelet[766]: I1115 11:50:59.533508     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/55ce0ad5-85a0-411a-9874-9b8c8e1b8595-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-t7kpg\" (UID: \"55ce0ad5-85a0-411a-9874-9b8c8e1b8595\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-t7kpg"
	Nov 15 11:50:59 no-preload-126380 kubelet[766]: I1115 11:50:59.534049     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lnbr\" (UniqueName: \"kubernetes.io/projected/bd9c4a81-fff7-4b1c-aa6d-921aca2695bd-kube-api-access-8lnbr\") pod \"dashboard-metrics-scraper-6ffb444bf9-9ngh7\" (UID: \"bd9c4a81-fff7-4b1c-aa6d-921aca2695bd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7"
	Nov 15 11:50:59 no-preload-126380 kubelet[766]: I1115 11:50:59.534156     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bd9c4a81-fff7-4b1c-aa6d-921aca2695bd-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-9ngh7\" (UID: \"bd9c4a81-fff7-4b1c-aa6d-921aca2695bd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7"
	Nov 15 11:50:59 no-preload-126380 kubelet[766]: I1115 11:50:59.534271     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm9dp\" (UniqueName: \"kubernetes.io/projected/55ce0ad5-85a0-411a-9874-9b8c8e1b8595-kube-api-access-lm9dp\") pod \"kubernetes-dashboard-855c9754f9-t7kpg\" (UID: \"55ce0ad5-85a0-411a-9874-9b8c8e1b8595\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-t7kpg"
	Nov 15 11:50:59 no-preload-126380 kubelet[766]: W1115 11:50:59.812971     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b66713a675540b073bebc71283c7c1eb6f613f579438e2db23c4a49021b95cf/crio-463d45d73c28d1be9e0aa733802f3de678d11a85fc3a1e8be9228af083d9db8f WatchSource:0}: Error finding container 463d45d73c28d1be9e0aa733802f3de678d11a85fc3a1e8be9228af083d9db8f: Status 404 returned error can't find the container with id 463d45d73c28d1be9e0aa733802f3de678d11a85fc3a1e8be9228af083d9db8f
	Nov 15 11:51:13 no-preload-126380 kubelet[766]: I1115 11:51:13.859426     766 scope.go:117] "RemoveContainer" containerID="98275348e9da976080fda2c5fb632e5a93e656574e63406cf24a86a80829d778"
	Nov 15 11:51:13 no-preload-126380 kubelet[766]: I1115 11:51:13.889643     766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-t7kpg" podStartSLOduration=7.742315534 podStartE2EDuration="14.888944815s" podCreationTimestamp="2025-11-15 11:50:59 +0000 UTC" firstStartedPulling="2025-11-15 11:50:59.818611789 +0000 UTC m=+14.809847686" lastFinishedPulling="2025-11-15 11:51:06.965241054 +0000 UTC m=+21.956476967" observedRunningTime="2025-11-15 11:51:07.869260125 +0000 UTC m=+22.860496022" watchObservedRunningTime="2025-11-15 11:51:13.888944815 +0000 UTC m=+28.880180712"
	Nov 15 11:51:14 no-preload-126380 kubelet[766]: I1115 11:51:14.863070     766 scope.go:117] "RemoveContainer" containerID="a4fcb529527c180e478a9215d20db8b24c3d5584a01637ee0ccbf7556136d806"
	Nov 15 11:51:14 no-preload-126380 kubelet[766]: E1115 11:51:14.863966     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9ngh7_kubernetes-dashboard(bd9c4a81-fff7-4b1c-aa6d-921aca2695bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7" podUID="bd9c4a81-fff7-4b1c-aa6d-921aca2695bd"
	Nov 15 11:51:14 no-preload-126380 kubelet[766]: I1115 11:51:14.864086     766 scope.go:117] "RemoveContainer" containerID="98275348e9da976080fda2c5fb632e5a93e656574e63406cf24a86a80829d778"
	Nov 15 11:51:19 no-preload-126380 kubelet[766]: I1115 11:51:19.765126     766 scope.go:117] "RemoveContainer" containerID="a4fcb529527c180e478a9215d20db8b24c3d5584a01637ee0ccbf7556136d806"
	Nov 15 11:51:19 no-preload-126380 kubelet[766]: E1115 11:51:19.765813     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9ngh7_kubernetes-dashboard(bd9c4a81-fff7-4b1c-aa6d-921aca2695bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7" podUID="bd9c4a81-fff7-4b1c-aa6d-921aca2695bd"
	Nov 15 11:51:26 no-preload-126380 kubelet[766]: I1115 11:51:26.895962     766 scope.go:117] "RemoveContainer" containerID="40537f2f9d73f24a3fc919e58c7be26902dcd9503e84419051fec20be5efa20d"
	Nov 15 11:51:32 no-preload-126380 kubelet[766]: I1115 11:51:32.460742     766 scope.go:117] "RemoveContainer" containerID="a4fcb529527c180e478a9215d20db8b24c3d5584a01637ee0ccbf7556136d806"
	Nov 15 11:51:32 no-preload-126380 kubelet[766]: I1115 11:51:32.923267     766 scope.go:117] "RemoveContainer" containerID="a4fcb529527c180e478a9215d20db8b24c3d5584a01637ee0ccbf7556136d806"
	Nov 15 11:51:33 no-preload-126380 kubelet[766]: I1115 11:51:33.926865     766 scope.go:117] "RemoveContainer" containerID="689c1c142f54cc6c23ec1e8d76b15c4c34360afb6cd644cb8eb6be69304ef50d"
	Nov 15 11:51:33 no-preload-126380 kubelet[766]: E1115 11:51:33.927018     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9ngh7_kubernetes-dashboard(bd9c4a81-fff7-4b1c-aa6d-921aca2695bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7" podUID="bd9c4a81-fff7-4b1c-aa6d-921aca2695bd"
	Nov 15 11:51:39 no-preload-126380 kubelet[766]: I1115 11:51:39.766991     766 scope.go:117] "RemoveContainer" containerID="689c1c142f54cc6c23ec1e8d76b15c4c34360afb6cd644cb8eb6be69304ef50d"
	Nov 15 11:51:39 no-preload-126380 kubelet[766]: E1115 11:51:39.768033     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9ngh7_kubernetes-dashboard(bd9c4a81-fff7-4b1c-aa6d-921aca2695bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9ngh7" podUID="bd9c4a81-fff7-4b1c-aa6d-921aca2695bd"
	Nov 15 11:51:42 no-preload-126380 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 11:51:42 no-preload-126380 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 11:51:42 no-preload-126380 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [6ba4ebfd5350b614116a1165ef7e8d2c6becd498c8a4d4af5dbdf487b9e37cb9] <==
	2025/11/15 11:51:07 Starting overwatch
	2025/11/15 11:51:07 Using namespace: kubernetes-dashboard
	2025/11/15 11:51:07 Using in-cluster config to connect to apiserver
	2025/11/15 11:51:07 Using secret token for csrf signing
	2025/11/15 11:51:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 11:51:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 11:51:07 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 11:51:07 Generating JWE encryption key
	2025/11/15 11:51:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 11:51:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 11:51:07 Initializing JWE encryption key from synchronized object
	2025/11/15 11:51:07 Creating in-cluster Sidecar client
	2025/11/15 11:51:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 11:51:07 Serving insecurely on HTTP port: 9090
	2025/11/15 11:51:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [40537f2f9d73f24a3fc919e58c7be26902dcd9503e84419051fec20be5efa20d] <==
	I1115 11:50:56.452631       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 11:51:26.455289       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e2a9e1acb1639da928e33bedd0d68edc4ebd14bd8de4a13663336f08668a6608] <==
	I1115 11:51:26.994516       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 11:51:27.021879       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 11:51:27.022048       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 11:51:27.026890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:30.482403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:34.743046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:38.341700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:41.396841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:44.422764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:44.437436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:51:44.437606       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 11:51:44.437999       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a1a6883d-ab3f-4fde-8358-8e509502c15b", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-126380_b39710d2-d292-4866-994b-dd49a09ee3ca became leader
	I1115 11:51:44.438034       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-126380_b39710d2-d292-4866-994b-dd49a09ee3ca!
	W1115 11:51:44.450339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:44.454129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 11:51:44.539108       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-126380_b39710d2-d292-4866-994b-dd49a09ee3ca!
	W1115 11:51:46.458007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 11:51:46.466823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-126380 -n no-preload-126380
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-126380 -n no-preload-126380: exit status 2 (364.489435ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-126380 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.03s)
E1115 11:57:27.144091  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:57:27.150535  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:57:27.161995  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:57:27.183638  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:57:27.225029  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:57:27.306634  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:57:27.468987  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:57:27.790587  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:57:28.432668  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:57:29.714043  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:57:32.276324  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:57:37.398526  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:57:47.640697  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:57:49.234361  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:57:56.404381  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:58:08.122197  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/auto-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:58:18.358529  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kindnet-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:58:18.364937  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kindnet-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:58:18.376424  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kindnet-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:58:18.397924  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kindnet-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:58:18.439395  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kindnet-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:58:18.520770  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kindnet-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:58:18.682486  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kindnet-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:58:19.004821  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kindnet-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:58:19.647067  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kindnet-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (256/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.45
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 5.08
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.15
18 TestDownloadOnly/v1.34.1/DeleteAll 0.33
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 162.08
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 10.8
48 TestAddons/StoppedEnableDisable 12.39
49 TestCertOptions 43.88
50 TestCertExpiration 253.03
52 TestForceSystemdFlag 38.52
53 TestForceSystemdEnv 43.6
58 TestErrorSpam/setup 33.44
59 TestErrorSpam/start 0.77
60 TestErrorSpam/status 1.06
61 TestErrorSpam/pause 7.05
62 TestErrorSpam/unpause 5.17
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 82.44
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 27.71
70 TestFunctional/serial/KubeContext 0.08
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.52
75 TestFunctional/serial/CacheCmd/cache/add_local 1.13
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
80 TestFunctional/serial/CacheCmd/cache/delete 0.14
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 41.03
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.45
86 TestFunctional/serial/LogsFileCmd 1.57
87 TestFunctional/serial/InvalidService 3.94
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 11.01
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.06
98 TestFunctional/parallel/AddonsCmd 0.22
99 TestFunctional/parallel/PersistentVolumeClaim 25.78
101 TestFunctional/parallel/SSHCmd 0.76
102 TestFunctional/parallel/CpCmd 2.53
104 TestFunctional/parallel/FileSync 0.72
105 TestFunctional/parallel/CertSync 2.17
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
113 TestFunctional/parallel/License 0.35
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.39
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
129 TestFunctional/parallel/MountCmd/any-port 7.89
130 TestFunctional/parallel/MountCmd/specific-port 1.97
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.31
132 TestFunctional/parallel/ServiceCmd/List 0.6
133 TestFunctional/parallel/ServiceCmd/JSONOutput 1.44
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 0.97
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.15
144 TestFunctional/parallel/ImageCommands/Setup 0.69
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 217.82
163 TestMultiControlPlane/serial/DeployApp 44.14
164 TestMultiControlPlane/serial/PingHostFromPods 1.52
165 TestMultiControlPlane/serial/AddWorkerNode 60.08
166 TestMultiControlPlane/serial/NodeLabels 0.1
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.07
168 TestMultiControlPlane/serial/CopyFile 20.32
169 TestMultiControlPlane/serial/StopSecondaryNode 12.87
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 136.22
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.74
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
176 TestMultiControlPlane/serial/StopCluster 36.55
179 TestMultiControlPlane/serial/AddSecondaryNode 79.31
185 TestJSONOutput/start/Command 51.05
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.79
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 39.07
211 TestKicCustomNetwork/use_default_bridge_network 38.41
212 TestKicExistingNetwork 38.22
213 TestKicCustomSubnet 35.42
214 TestKicStaticIP 40.13
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 72.04
219 TestMountStart/serial/StartWithMountFirst 8.77
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 9.35
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.76
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.31
226 TestMountStart/serial/RestartStopped 8.26
227 TestMountStart/serial/VerifyMountPostStop 0.31
230 TestMultiNode/serial/FreshStart2Nodes 140.07
231 TestMultiNode/serial/DeployApp2Nodes 5.44
232 TestMultiNode/serial/PingHostFrom2Pods 0.91
233 TestMultiNode/serial/AddNode 56.68
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.68
237 TestMultiNode/serial/StopNode 2.47
238 TestMultiNode/serial/StartAfterStop 8.46
239 TestMultiNode/serial/RestartKeepsNodes 72.74
240 TestMultiNode/serial/DeleteNode 5.65
241 TestMultiNode/serial/StopMultiNode 24.11
242 TestMultiNode/serial/RestartMultiNode 51.23
243 TestMultiNode/serial/ValidateNameConflict 37.86
248 TestPreload 132.47
250 TestScheduledStopUnix 111.54
253 TestInsufficientStorage 14.05
254 TestRunningBinaryUpgrade 55.76
256 TestKubernetesUpgrade 367.04
257 TestMissingContainerUpgrade 117.25
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 48.06
261 TestNoKubernetes/serial/StartWithStopK8s 10.34
262 TestNoKubernetes/serial/Start 8.91
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
265 TestNoKubernetes/serial/ProfileList 0.73
266 TestNoKubernetes/serial/Stop 1.29
267 TestNoKubernetes/serial/StartNoArgs 7.49
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
269 TestStoppedBinaryUpgrade/Setup 0.79
270 TestStoppedBinaryUpgrade/Upgrade 57.29
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.21
280 TestPause/serial/Start 80.8
281 TestPause/serial/SecondStartNoReconfiguration 28.99
290 TestNetworkPlugins/group/false 5.1
295 TestStartStop/group/old-k8s-version/serial/FirstStart 62.5
296 TestStartStop/group/old-k8s-version/serial/DeployApp 8.46
298 TestStartStop/group/old-k8s-version/serial/Stop 12.03
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
300 TestStartStop/group/old-k8s-version/serial/SecondStart 48.42
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.47
308 TestStartStop/group/embed-certs/serial/FirstStart 80.64
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.38
311 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.03
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
313 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 58.12
314 TestStartStop/group/embed-certs/serial/DeployApp 9.41
316 TestStartStop/group/embed-certs/serial/Stop 12.02
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
318 TestStartStop/group/embed-certs/serial/SecondStart 51.96
319 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.14
321 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
324 TestStartStop/group/no-preload/serial/FirstStart 66.16
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.16
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
330 TestStartStop/group/newest-cni/serial/FirstStart 41.19
331 TestStartStop/group/no-preload/serial/DeployApp 8.43
333 TestStartStop/group/no-preload/serial/Stop 12.16
334 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/Stop 1.33
337 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
338 TestStartStop/group/newest-cni/serial/SecondStart 19.58
339 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
340 TestStartStop/group/no-preload/serial/SecondStart 53.71
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.42
345 TestNetworkPlugins/group/auto/Start 83.47
346 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
347 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
348 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.34
350 TestNetworkPlugins/group/kindnet/Start 86.86
351 TestNetworkPlugins/group/auto/KubeletFlags 0.41
352 TestNetworkPlugins/group/auto/NetCatPod 10.39
353 TestNetworkPlugins/group/auto/DNS 0.19
354 TestNetworkPlugins/group/auto/Localhost 0.16
355 TestNetworkPlugins/group/auto/HairPin 0.14
356 TestNetworkPlugins/group/calico/Start 85.11
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
359 TestNetworkPlugins/group/kindnet/NetCatPod 11.39
360 TestNetworkPlugins/group/kindnet/DNS 0.18
361 TestNetworkPlugins/group/kindnet/Localhost 0.14
362 TestNetworkPlugins/group/kindnet/HairPin 0.14
363 TestNetworkPlugins/group/custom-flannel/Start 57.87
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.45
366 TestNetworkPlugins/group/calico/NetCatPod 11.41
367 TestNetworkPlugins/group/calico/DNS 0.18
368 TestNetworkPlugins/group/calico/Localhost 0.15
369 TestNetworkPlugins/group/calico/HairPin 0.16
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.41
372 TestNetworkPlugins/group/enable-default-cni/Start 85.12
373 TestNetworkPlugins/group/custom-flannel/DNS 0.37
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
376 TestNetworkPlugins/group/flannel/Start 58.15
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
384 TestNetworkPlugins/group/flannel/NetCatPod 14.26
385 TestNetworkPlugins/group/flannel/DNS 0.21
386 TestNetworkPlugins/group/flannel/Localhost 0.19
387 TestNetworkPlugins/group/flannel/HairPin 0.19
388 TestNetworkPlugins/group/bridge/Start 76.43
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
390 TestNetworkPlugins/group/bridge/NetCatPod 10.26
391 TestNetworkPlugins/group/bridge/DNS 0.14
392 TestNetworkPlugins/group/bridge/Localhost 0.12
393 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (5.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-148158 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-148158 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.445275914s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.45s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1115 10:31:29.125227  586561 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1115 10:31:29.125314  586561 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-148158
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-148158: exit status 85 (95.758802ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-148158 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-148158 │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:31:23
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:31:23.727835  586566 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:31:23.728030  586566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:31:23.728041  586566 out.go:374] Setting ErrFile to fd 2...
	I1115 10:31:23.728048  586566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:31:23.728379  586566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	W1115 10:31:23.728517  586566 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21894-584713/.minikube/config/config.json: open /home/jenkins/minikube-integration/21894-584713/.minikube/config/config.json: no such file or directory
	I1115 10:31:23.729090  586566 out.go:368] Setting JSON to true
	I1115 10:31:23.729965  586566 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8035,"bootTime":1763194649,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 10:31:23.730038  586566 start.go:143] virtualization:  
	I1115 10:31:23.734300  586566 out.go:99] [download-only-148158] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1115 10:31:23.734476  586566 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball: no such file or directory
	I1115 10:31:23.734556  586566 notify.go:221] Checking for updates...
	I1115 10:31:23.737523  586566 out.go:171] MINIKUBE_LOCATION=21894
	I1115 10:31:23.740677  586566 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:31:23.743736  586566 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 10:31:23.747421  586566 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 10:31:23.750381  586566 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1115 10:31:23.756052  586566 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1115 10:31:23.756335  586566 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:31:23.786096  586566 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:31:23.786240  586566 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:31:23.847429  586566 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-15 10:31:23.834566282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:31:23.847539  586566 docker.go:319] overlay module found
	I1115 10:31:23.850528  586566 out.go:99] Using the docker driver based on user configuration
	I1115 10:31:23.850571  586566 start.go:309] selected driver: docker
	I1115 10:31:23.850578  586566 start.go:930] validating driver "docker" against <nil>
	I1115 10:31:23.850693  586566 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:31:23.914735  586566 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-15 10:31:23.905923575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:31:23.914889  586566 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:31:23.915152  586566 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1115 10:31:23.915328  586566 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 10:31:23.918371  586566 out.go:171] Using Docker driver with root privileges
	I1115 10:31:23.921455  586566 cni.go:84] Creating CNI manager for ""
	I1115 10:31:23.921525  586566 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:31:23.921538  586566 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:31:23.921618  586566 start.go:353] cluster config:
	{Name:download-only-148158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-148158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:31:23.924575  586566 out.go:99] Starting "download-only-148158" primary control-plane node in "download-only-148158" cluster
	I1115 10:31:23.924601  586566 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:31:23.927543  586566 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:31:23.927599  586566 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:31:23.927780  586566 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:31:23.943583  586566 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 10:31:23.943755  586566 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 10:31:23.943853  586566 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 10:31:23.989602  586566 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1115 10:31:23.989633  586566 cache.go:65] Caching tarball of preloaded images
	I1115 10:31:23.989793  586566 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:31:23.993122  586566 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1115 10:31:23.993156  586566 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1115 10:31:24.077971  586566 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1115 10:31:24.078101  586566 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1115 10:31:28.440233  586566 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1115 10:31:28.440763  586566 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/download-only-148158/config.json ...
	I1115 10:31:28.440806  586566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/download-only-148158/config.json: {Name:mkf21b4cb19133735fb2ec4725b78054bf57e395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:31:28.441101  586566 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:31:28.441330  586566 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-148158 host does not exist
	  To start a cluster, run: "minikube start -p download-only-148158"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-148158
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (5.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-948757 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-948757 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.082515828s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.08s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1115 10:31:34.657828  586561 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1115 10:31:34.657862  586561 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-948757
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-948757: exit status 85 (145.432787ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-148158 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-148158 │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:31 UTC │
	│ delete  │ -p download-only-148158                                                                                                                                                   │ download-only-148158 │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:31 UTC │
	│ start   │ -o=json --download-only -p download-only-948757 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-948757 │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:31:29
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:31:29.619164  586763 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:31:29.619336  586763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:31:29.619363  586763 out.go:374] Setting ErrFile to fd 2...
	I1115 10:31:29.619382  586763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:31:29.619654  586763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:31:29.620079  586763 out.go:368] Setting JSON to true
	I1115 10:31:29.620978  586763 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8041,"bootTime":1763194649,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 10:31:29.621070  586763 start.go:143] virtualization:  
	I1115 10:31:29.624516  586763 out.go:99] [download-only-948757] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:31:29.624723  586763 notify.go:221] Checking for updates...
	I1115 10:31:29.627706  586763 out.go:171] MINIKUBE_LOCATION=21894
	I1115 10:31:29.630721  586763 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:31:29.633720  586763 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 10:31:29.636657  586763 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 10:31:29.639642  586763 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1115 10:31:29.645919  586763 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1115 10:31:29.646181  586763 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:31:29.674363  586763 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:31:29.674473  586763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:31:29.735673  586763 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-15 10:31:29.726593805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:31:29.735778  586763 docker.go:319] overlay module found
	I1115 10:31:29.738703  586763 out.go:99] Using the docker driver based on user configuration
	I1115 10:31:29.738747  586763 start.go:309] selected driver: docker
	I1115 10:31:29.738762  586763 start.go:930] validating driver "docker" against <nil>
	I1115 10:31:29.738869  586763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:31:29.804922  586763 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-15 10:31:29.794723686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:31:29.805079  586763 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:31:29.805347  586763 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1115 10:31:29.805499  586763 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 10:31:29.808648  586763 out.go:171] Using Docker driver with root privileges
	I1115 10:31:29.811499  586763 cni.go:84] Creating CNI manager for ""
	I1115 10:31:29.811570  586763 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:31:29.811584  586763 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:31:29.811666  586763 start.go:353] cluster config:
	{Name:download-only-948757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-948757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:31:29.814733  586763 out.go:99] Starting "download-only-948757" primary control-plane node in "download-only-948757" cluster
	I1115 10:31:29.814766  586763 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:31:29.817689  586763 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:31:29.817773  586763 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:31:29.817858  586763 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:31:29.833913  586763 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 10:31:29.834042  586763 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 10:31:29.834077  586763 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1115 10:31:29.834085  586763 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1115 10:31:29.834094  586763 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1115 10:31:29.869036  586763 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:31:29.869064  586763 cache.go:65] Caching tarball of preloaded images
	I1115 10:31:29.869290  586763 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:31:29.872468  586763 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1115 10:31:29.872518  586763 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1115 10:31:29.959437  586763 preload.go:295] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1115 10:31:29.959492  586763 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:31:34.039006  586763 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:31:34.039387  586763 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/download-only-948757/config.json ...
	I1115 10:31:34.039421  586763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/download-only-948757/config.json: {Name:mkd1688913c2b1fdb5ae6d55b54333f3136c1658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:31:34.039612  586763 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:31:34.039783  586763 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21894-584713/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-948757 host does not exist
	  To start a cluster, run: "minikube start -p download-only-948757"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.33s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-948757
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I1115 10:31:36.522454  586561 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-014145 --alsologtostderr --binary-mirror http://127.0.0.1:33887 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-014145" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-014145
--- PASS: TestBinaryMirror (0.63s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-800763
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-800763: exit status 85 (73.731269ms)

                                                
                                                
-- stdout --
	* Profile "addons-800763" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-800763"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-800763
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-800763: exit status 85 (83.474845ms)

                                                
                                                
-- stdout --
	* Profile "addons-800763" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-800763"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (162.08s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-800763 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-800763 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m42.075863031s)
--- PASS: TestAddons/Setup (162.08s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-800763 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-800763 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.8s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-800763 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-800763 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4586036d-8a43-480f-b6fc-9fa267e5a0d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4586036d-8a43-480f-b6fc-9fa267e5a0d7] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003079422s
addons_test.go:694: (dbg) Run:  kubectl --context addons-800763 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-800763 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-800763 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-800763 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.80s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.39s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-800763
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-800763: (12.109834753s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-800763
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-800763
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-800763
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

                                                
                                    
TestCertOptions (43.88s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-303284 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-303284 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (40.85461048s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-303284 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-303284 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-303284 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-303284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-303284
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-303284: (2.226681334s)
--- PASS: TestCertOptions (43.88s)

                                                
                                    
TestCertExpiration (253.03s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-636406 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-636406 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.278404412s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-636406 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1115 11:46:22.372700  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-636406 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (28.221246738s)
helpers_test.go:175: Cleaning up "cert-expiration-636406" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-636406
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-636406: (2.526102082s)
--- PASS: TestCertExpiration (253.03s)

                                                
                                    
TestForceSystemdFlag (38.52s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-422723 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1115 11:41:22.372357  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-422723 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.232751937s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-422723 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-422723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-422723
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-422723: (2.890674628s)
--- PASS: TestForceSystemdFlag (38.52s)

                                                
                                    
TestForceSystemdEnv (43.6s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-386707 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-386707 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.221348619s)
helpers_test.go:175: Cleaning up "force-systemd-env-386707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-386707
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-386707: (5.379534544s)
--- PASS: TestForceSystemdEnv (43.60s)

                                                
                                    
TestErrorSpam/setup (33.44s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-313030 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-313030 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-313030 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-313030 --driver=docker  --container-runtime=crio: (33.442199888s)
--- PASS: TestErrorSpam/setup (33.44s)

                                                
                                    
TestErrorSpam/start (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
TestErrorSpam/status (1.06s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 status
--- PASS: TestErrorSpam/status (1.06s)

                                                
                                    
TestErrorSpam/pause (7.05s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 pause: exit status 80 (2.494070112s)

                                                
                                                
-- stdout --
	* Pausing node nospam-313030 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:38:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 pause: exit status 80 (2.33482399s)

                                                
                                                
-- stdout --
	* Pausing node nospam-313030 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:38:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 pause: exit status 80 (2.221790637s)

                                                
                                                
-- stdout --
	* Pausing node nospam-313030 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:38:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (7.05s)

                                                
                                    
TestErrorSpam/unpause (5.17s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 unpause: exit status 80 (1.739354599s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-313030 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:38:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 unpause: exit status 80 (1.83457599s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-313030 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:38:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 unpause: exit status 80 (1.591641216s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-313030 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:38:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.17s)

                                                
                                    
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 stop: (1.304796833s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-313030 --log_dir /tmp/nospam-313030 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21894-584713/.minikube/files/etc/test/nested/copy/586561/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (82.44s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-385299 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1115 10:39:20.134694  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:20.141322  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:20.152713  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:20.174116  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:20.215612  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:20.297074  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:20.458669  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:20.780477  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:21.422274  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:22.703895  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:25.265993  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:30.387393  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:40.628672  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-385299 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m22.44452011s)
--- PASS: TestFunctional/serial/StartWithProxy (82.44s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (27.71s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1115 10:39:57.116275  586561 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-385299 --alsologtostderr -v=8
E1115 10:40:01.110184  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-385299 --alsologtostderr -v=8: (27.710692391s)
functional_test.go:678: soft start took 27.711187527s for "functional-385299" cluster.
I1115 10:40:24.827281  586561 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (27.71s)

                                                
                                    
TestFunctional/serial/KubeContext (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-385299 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-385299 cache add registry.k8s.io/pause:3.1: (1.226002422s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-385299 cache add registry.k8s.io/pause:3.3: (1.196994239s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-385299 cache add registry.k8s.io/pause:latest: (1.101508742s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.52s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-385299 /tmp/TestFunctionalserialCacheCmdcacheadd_local2623596762/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 cache add minikube-local-cache-test:functional-385299
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 cache delete minikube-local-cache-test:functional-385299
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-385299
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-385299 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.005295ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)
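Note: the cache_reload sequence above is a useful recovery pattern on its own. A minimal sketch with the same commands as this run (profile and image as logged); crictl inspecti is expected to fail with "no such image" between the rmi and the reload:

    out/minikube-linux-arm64 -p functional-385299 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-385299 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image removed from the node
    out/minikube-linux-arm64 -p functional-385299 cache reload                                            # re-pushes cached images into the node
    out/minikube-linux-arm64 -p functional-385299 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again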

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 kubectl -- --context functional-385299 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-385299 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.03s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-385299 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1115 10:40:42.073073  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-385299 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.03010942s)
functional_test.go:776: restart took 41.030206266s for "functional-385299" cluster.
I1115 10:41:13.411881  586561 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (41.03s)
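Note: the restart exercised here passes an extra apiserver admission-plugin flag and blocks until all components are healthy; the invocation, taken verbatim from the run above, can be reused against any existing profile:

    out/minikube-linux-arm64 start -p functional-385299 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all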

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-385299 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
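Note: component health is read straight from the control-plane pods; the query used by the run above works against any kubeconfig context:

    kubectl --context functional-385299 get po -l tier=control-plane -n kube-system -o=json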

                                                
                                    
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-385299 logs: (1.452371165s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 logs --file /tmp/TestFunctionalserialLogsFileCmd2333914275/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-385299 logs --file /tmp/TestFunctionalserialLogsFileCmd2333914275/001/logs.txt: (1.566642461s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.57s)

                                                
                                    
TestFunctional/serial/InvalidService (3.94s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-385299 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-385299
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-385299: exit status 115 (406.348824ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31279 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-385299 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-385299 config get cpus: exit status 14 (80.505947ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-385299 config get cpus: exit status 14 (71.094916ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
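Note: the config round-trip above doubles as a reference for the exit codes; a sketch with the same commands (exit status 14 signals a key that is not set):

    out/minikube-linux-arm64 -p functional-385299 config set cpus 2
    out/minikube-linux-arm64 -p functional-385299 config get cpus     # prints 2
    out/minikube-linux-arm64 -p functional-385299 config unset cpus
    out/minikube-linux-arm64 -p functional-385299 config get cpus     # exit status 14: key not found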

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-385299 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-385299 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 612758: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.01s)

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-385299 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-385299 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (186.368209ms)

                                                
                                                
-- stdout --
	* [functional-385299] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:51:48.550855  612292 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:51:48.551055  612292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:51:48.551082  612292 out.go:374] Setting ErrFile to fd 2...
	I1115 10:51:48.551101  612292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:51:48.551374  612292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:51:48.551804  612292 out.go:368] Setting JSON to false
	I1115 10:51:48.552753  612292 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9259,"bootTime":1763194649,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 10:51:48.552896  612292 start.go:143] virtualization:  
	I1115 10:51:48.556407  612292 out.go:179] * [functional-385299] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:51:48.559537  612292 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:51:48.559614  612292 notify.go:221] Checking for updates...
	I1115 10:51:48.563483  612292 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:51:48.566351  612292 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 10:51:48.569317  612292 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 10:51:48.572238  612292 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:51:48.575158  612292 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:51:48.578520  612292 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:51:48.579091  612292 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:51:48.609006  612292 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:51:48.609116  612292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:51:48.666267  612292 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:51:48.656935598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:51:48.666428  612292 docker.go:319] overlay module found
	I1115 10:51:48.669581  612292 out.go:179] * Using the docker driver based on existing profile
	I1115 10:51:48.672459  612292 start.go:309] selected driver: docker
	I1115 10:51:48.672477  612292 start.go:930] validating driver "docker" against &{Name:functional-385299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-385299 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:51:48.672570  612292 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:51:48.676299  612292 out.go:203] 
	W1115 10:51:48.679229  612292 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1115 10:51:48.681995  612292 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-385299 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
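Note: --dry-run validates flags without touching the cluster; the same pair of invocations from this run shows the failure and success paths (250MB is rejected with RSRC_INSUFFICIENT_REQ_MEMORY, exit status 23):

    out/minikube-linux-arm64 start -p functional-385299 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio   # exit 23
    out/minikube-linux-arm64 start -p functional-385299 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio             # passes validation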

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-385299 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-385299 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (200.375804ms)

                                                
                                                
-- stdout --
	* [functional-385299] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:51:48.359882  612244 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:51:48.360022  612244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:51:48.360033  612244 out.go:374] Setting ErrFile to fd 2...
	I1115 10:51:48.360038  612244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:51:48.360425  612244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:51:48.360828  612244 out.go:368] Setting JSON to false
	I1115 10:51:48.361748  612244 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9259,"bootTime":1763194649,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 10:51:48.361821  612244 start.go:143] virtualization:  
	I1115 10:51:48.365626  612244 out.go:179] * [functional-385299] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1115 10:51:48.368609  612244 notify.go:221] Checking for updates...
	I1115 10:51:48.369532  612244 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:51:48.372542  612244 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:51:48.375325  612244 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 10:51:48.378090  612244 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 10:51:48.381044  612244 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:51:48.383946  612244 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:51:48.387402  612244 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:51:48.387981  612244 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:51:48.419334  612244 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:51:48.419459  612244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:51:48.479986  612244 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:51:48.469767642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:51:48.480098  612244 docker.go:319] overlay module found
	I1115 10:51:48.483277  612244 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1115 10:51:48.486202  612244 start.go:309] selected driver: docker
	I1115 10:51:48.486243  612244 start.go:930] validating driver "docker" against &{Name:functional-385299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-385299 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:51:48.486338  612244 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:51:48.489844  612244 out.go:203] 
	W1115 10:51:48.492715  612244 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1115 10:51:48.495450  612244 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [90f0bcb7-8b44-40de-bee1-b8485a3c1b64] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004346555s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-385299 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-385299 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-385299 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-385299 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ff8d7f5d-3334-4325-8720-461fe74c7d29] Pending
helpers_test.go:352: "sp-pod" [ff8d7f5d-3334-4325-8720-461fe74c7d29] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ff8d7f5d-3334-4325-8720-461fe74c7d29] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003686493s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-385299 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-385299 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-385299 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [a9f8c85e-4c20-458b-8c7a-a3002f17c4ff] Pending
helpers_test.go:352: "sp-pod" [a9f8c85e-4c20-458b-8c7a-a3002f17c4ff] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [a9f8c85e-4c20-458b-8c7a-a3002f17c4ff] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008106375s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-385299 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.78s)
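Note: the PVC scenario above demonstrates that data written to the claim survives pod deletion; a sketch of the same steps (the testdata manifests ship with the minikube integration tests; any equivalent PVC/pod pair would do):

    kubectl --context functional-385299 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-385299 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-385299 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-385299 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-385299 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-385299 exec sp-pod -- ls /tmp/mount    # foo is still there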

                                                
                                    
TestFunctional/parallel/SSHCmd (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.76s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh -n functional-385299 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 cp functional-385299:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1483635294/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh -n functional-385299 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh -n functional-385299 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.53s)

                                                
                                    
TestFunctional/parallel/FileSync (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/586561/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "sudo cat /etc/test/nested/copy/586561/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.72s)

                                                
                                    
TestFunctional/parallel/CertSync (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/586561.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "sudo cat /etc/ssl/certs/586561.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/586561.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "sudo cat /usr/share/ca-certificates/586561.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5865612.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "sudo cat /etc/ssl/certs/5865612.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5865612.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "sudo cat /usr/share/ca-certificates/5865612.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.17s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-385299 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-385299 ssh "sudo systemctl is-active docker": exit status 1 (347.887632ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-385299 ssh "sudo systemctl is-active containerd": exit status 1 (363.039223ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
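Note: with crio as the active runtime, the other runtimes should report inactive; the checks above can be run directly (each prints "inactive" and the ssh command exits with status 3):

    out/minikube-linux-arm64 -p functional-385299 ssh "sudo systemctl is-active docker"
    out/minikube-linux-arm64 -p functional-385299 ssh "sudo systemctl is-active containerd"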

                                                
                                    
TestFunctional/parallel/License (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-385299 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-385299 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-385299 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-385299 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 608771: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-385299 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-385299 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [3a9f1fbe-92ca-4344-8676-91e4479d2fa7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [3a9f1fbe-92ca-4344-8676-91e4479d2fa7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003641464s
I1115 10:41:30.763576  586561 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.39s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-385299 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.139.247 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
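Note: the tunnel flow above assigns a routable ingress IP to a LoadBalancer service; a sketch of the same steps (backgrounding the tunnel with & is an assumption about how to keep it running, and the ingress IP 10.106.139.247 is specific to this run):

    out/minikube-linux-arm64 -p functional-385299 tunnel --alsologtostderr &
    kubectl --context functional-385299 apply -f testdata/testsvc.yaml
    kubectl --context functional-385299 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}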

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-385299 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "371.355932ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "53.482396ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "360.044802ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.574446ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-385299 /tmp/TestFunctionalparallelMountCmdany-port3839456273/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763203896079388245" to /tmp/TestFunctionalparallelMountCmdany-port3839456273/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763203896079388245" to /tmp/TestFunctionalparallelMountCmdany-port3839456273/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763203896079388245" to /tmp/TestFunctionalparallelMountCmdany-port3839456273/001/test-1763203896079388245
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-385299 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.212491ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 10:51:36.399874  586561 retry.go:31] will retry after 489.543661ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 15 10:51 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 15 10:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 15 10:51 test-1763203896079388245
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh cat /mount-9p/test-1763203896079388245
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-385299 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [be36adf5-b580-4178-a0dd-1d34c289e22e] Pending
helpers_test.go:352: "busybox-mount" [be36adf5-b580-4178-a0dd-1d34c289e22e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [be36adf5-b580-4178-a0dd-1d34c289e22e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [be36adf5-b580-4178-a0dd-1d34c289e22e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00588088s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-385299 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-385299 /tmp/TestFunctionalparallelMountCmdany-port3839456273/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.89s)
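Note: the 9p mount flow above can be replayed manually; a sketch with the same commands (/host/dir is a placeholder for any host directory, backgrounding the mount with & is an assumption, and --kill=true, used by the VerifyCleanup subtest below, tears down all mounts for the profile):

    out/minikube-linux-arm64 mount -p functional-385299 /host/dir:/mount-9p &
    out/minikube-linux-arm64 -p functional-385299 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-385299 ssh -- ls -la /mount-9p
    out/minikube-linux-arm64 mount -p functional-385299 --kill=true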

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-385299 /tmp/TestFunctionalparallelMountCmdspecific-port3859615684/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-385299 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (379.492566ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 10:51:44.347191  586561 retry.go:31] will retry after 367.122364ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-385299 /tmp/TestFunctionalparallelMountCmdspecific-port3859615684/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-385299 ssh "sudo umount -f /mount-9p": exit status 1 (289.429441ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-385299 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-385299 /tmp/TestFunctionalparallelMountCmdspecific-port3859615684/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)
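The pass above hinges on the retry at 10:51:44: the first findmnt probe ran before the 9p mount was visible, so the harness retried after ~367ms (retry.go:31). Below is a minimal Go sketch of that retry-on-nonzero-exit pattern, reusing the binary path, profile and mount point from this run; the helper itself is illustrative, not the test's own code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// check asks the guest whether /mount-9p is currently backed by a 9p filesystem,
// mirroring the "minikube ssh findmnt" probe in the log above.
func check() error {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-385299",
		"ssh", "findmnt -T /mount-9p | grep 9p")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// The mount daemon may still be starting, so tolerate one failure and
	// retry after a short back-off, as the harness does via retry.go.
	if err := check(); err != nil {
		time.Sleep(400 * time.Millisecond)
		if err := check(); err != nil {
			fmt.Println("mount never became visible:", err)
		}
	}
}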
TestFunctional/parallel/MountCmd/VerifyCleanup (1.31s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-385299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup658652849/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-385299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup658652849/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-385299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup658652849/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-385299 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-385299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup658652849/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-385299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup658652849/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-385299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup658652849/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.31s)

TestFunctional/parallel/ServiceCmd/List (0.6s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-385299 service list -o json: (1.437298746s)
functional_test.go:1504: Took "1.437383333s" to run "out/minikube-linux-arm64 -p functional-385299 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.44s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.97s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.97s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-385299 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-385299 image ls --format short --alsologtostderr:
I1115 10:52:04.371915  614995 out.go:360] Setting OutFile to fd 1 ...
I1115 10:52:04.372075  614995 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 10:52:04.372094  614995 out.go:374] Setting ErrFile to fd 2...
I1115 10:52:04.372117  614995 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 10:52:04.376908  614995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
I1115 10:52:04.378114  614995 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 10:52:04.378319  614995 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 10:52:04.379110  614995 cli_runner.go:164] Run: docker container inspect functional-385299 --format={{.State.Status}}
I1115 10:52:04.402984  614995 ssh_runner.go:195] Run: systemctl --version
I1115 10:52:04.403042  614995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
I1115 10:52:04.426561  614995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/functional-385299/id_rsa Username:docker}
I1115 10:52:04.536204  614995 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-385299 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ latest             │ 2d5a8f08b76da │ 176MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-385299 image ls --format table --alsologtostderr:
I1115 10:52:05.034419  615177 out.go:360] Setting OutFile to fd 1 ...
I1115 10:52:05.035022  615177 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 10:52:05.035064  615177 out.go:374] Setting ErrFile to fd 2...
I1115 10:52:05.035082  615177 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 10:52:05.035407  615177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
I1115 10:52:05.036194  615177 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 10:52:05.036376  615177 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 10:52:05.036906  615177 cli_runner.go:164] Run: docker container inspect functional-385299 --format={{.State.Status}}
I1115 10:52:05.063525  615177 ssh_runner.go:195] Run: systemctl --version
I1115 10:52:05.063586  615177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
I1115 10:52:05.086501  615177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/functional-385299/id_rsa Username:docker}
I1115 10:52:05.198838  615177 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-385299 image ls --format json --alsologtostderr:
[{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79
645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io
/pause:3.1"],"size":"528622"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"2d5a8f08b76da55a3731f09e696a0ee5c6d8115ba5e80c5ae2ae1c210b3b1b98","repoDigests":["docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad","docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33"],"repoTags":["docker.io/l
ibrary/nginx:latest"],"size":"176006678"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k
8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef
4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-385299 image ls --format json --alsologtostderr:
I1115 10:52:04.450764  615009 out.go:360] Setting OutFile to fd 1 ...
I1115 10:52:04.450917  615009 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 10:52:04.450924  615009 out.go:374] Setting ErrFile to fd 2...
I1115 10:52:04.450927  615009 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 10:52:04.451199  615009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
I1115 10:52:04.451879  615009 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 10:52:04.451983  615009 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 10:52:04.452466  615009 cli_runner.go:164] Run: docker container inspect functional-385299 --format={{.State.Status}}
I1115 10:52:04.475487  615009 ssh_runner.go:195] Run: systemctl --version
I1115 10:52:04.475552  615009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
I1115 10:52:04.497221  615009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/functional-385299/id_rsa Username:docker}
I1115 10:52:04.604486  615009 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
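The stdout above is a single JSON array whose elements carry id, repoDigests, repoTags and a size encoded as a string. The following Go sketch decodes that shape; the struct fields are inferred from this run's output, not taken from minikube's own types.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the "image ls --format json" output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-385299",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		// Untagged entries (such as the dashboard images above) have an empty repoTags list.
		fmt.Printf("%.13s  tags=%d  size=%s\n", img.ID, len(img.RepoTags), img.Size)
	}
}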
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-385299 image ls --format yaml --alsologtostderr:
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 2d5a8f08b76da55a3731f09e696a0ee5c6d8115ba5e80c5ae2ae1c210b3b1b98
repoDigests:
- docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad
- docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33
repoTags:
- docker.io/library/nginx:latest
size: "176006678"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-385299 image ls --format yaml --alsologtostderr:
I1115 10:52:04.716785  615087 out.go:360] Setting OutFile to fd 1 ...
I1115 10:52:04.719147  615087 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 10:52:04.719204  615087 out.go:374] Setting ErrFile to fd 2...
I1115 10:52:04.719227  615087 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 10:52:04.719542  615087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
I1115 10:52:04.720209  615087 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 10:52:04.720667  615087 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 10:52:04.721727  615087 cli_runner.go:164] Run: docker container inspect functional-385299 --format={{.State.Status}}
I1115 10:52:04.768106  615087 ssh_runner.go:195] Run: systemctl --version
I1115 10:52:04.768157  615087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
I1115 10:52:04.801324  615087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/functional-385299/id_rsa Username:docker}
I1115 10:52:04.907403  615087 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-385299 ssh pgrep buildkitd: exit status 1 (363.665913ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image build -t localhost/my-image:functional-385299 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-385299 image build -t localhost/my-image:functional-385299 testdata/build --alsologtostderr: (3.541469121s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-385299 image build -t localhost/my-image:functional-385299 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d5e5053e91c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-385299
--> a12f97760d9
Successfully tagged localhost/my-image:functional-385299
a12f97760d9a2fa3eb52a91a9a7f62ccfbde06b71296d6c488488dcb7e1ee023
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-385299 image build -t localhost/my-image:functional-385299 testdata/build --alsologtostderr:
I1115 10:52:05.023476  615173 out.go:360] Setting OutFile to fd 1 ...
I1115 10:52:05.025984  615173 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 10:52:05.026008  615173 out.go:374] Setting ErrFile to fd 2...
I1115 10:52:05.026016  615173 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 10:52:05.026328  615173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
I1115 10:52:05.027096  615173 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 10:52:05.027883  615173 config.go:182] Loaded profile config "functional-385299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 10:52:05.028521  615173 cli_runner.go:164] Run: docker container inspect functional-385299 --format={{.State.Status}}
I1115 10:52:05.055313  615173 ssh_runner.go:195] Run: systemctl --version
I1115 10:52:05.055385  615173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-385299
I1115 10:52:05.082549  615173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/functional-385299/id_rsa Username:docker}
I1115 10:52:05.197008  615173 build_images.go:162] Building image from path: /tmp/build.1616082011.tar
I1115 10:52:05.197086  615173 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1115 10:52:05.206668  615173 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1616082011.tar
I1115 10:52:05.212043  615173 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1616082011.tar: stat -c "%s %y" /var/lib/minikube/build/build.1616082011.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1616082011.tar': No such file or directory
I1115 10:52:05.212078  615173 ssh_runner.go:362] scp /tmp/build.1616082011.tar --> /var/lib/minikube/build/build.1616082011.tar (3072 bytes)
I1115 10:52:05.240654  615173 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1616082011
I1115 10:52:05.248777  615173 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1616082011 -xf /var/lib/minikube/build/build.1616082011.tar
I1115 10:52:05.257621  615173 crio.go:315] Building image: /var/lib/minikube/build/build.1616082011
I1115 10:52:05.257689  615173 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-385299 /var/lib/minikube/build/build.1616082011 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1115 10:52:08.456043  615173 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-385299 /var/lib/minikube/build/build.1616082011 --cgroup-manager=cgroupfs: (3.198327325s)
I1115 10:52:08.456145  615173 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1616082011
I1115 10:52:08.464254  615173 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1616082011.tar
I1115 10:52:08.471779  615173 build_images.go:218] Built localhost/my-image:functional-385299 from /tmp/build.1616082011.tar
I1115 10:52:08.471810  615173 build_images.go:134] succeeded building to: functional-385299
I1115 10:52:08.471815  615173 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.15s)
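The build above ships a 3072-byte tar of testdata/build into the node and lets CRI-O's podman run the three STEP lines. Below is a hedged Go sketch of driving the same flow end to end: build, then confirm the tag is listed. Binary path, profile and tag are copied from this run; this is not the test's own helper code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	minikube := "out/minikube-linux-arm64"
	profile := "functional-385299"
	tag := "localhost/my-image:functional-385299"

	// Same invocation as functional_test.go:330 above: the CLI tars up the
	// build context and unpacks it under /var/lib/minikube/build on the node.
	if out, err := exec.Command(minikube, "-p", profile, "image", "build",
		"-t", tag, "testdata/build").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}

	// Same follow-up as functional_test.go:466 above: the new tag must show
	// up in "image ls" for the build to count as successful.
	ls, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	if !strings.Contains(string(ls), tag) {
		panic("built image not listed")
	}
	fmt.Println("built and listed:", tag)
}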
TestFunctional/parallel/ImageCommands/Setup (0.69s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-385299
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image rm kicbase/echo-server:functional-385299 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-385299 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-385299
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-385299
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-385299
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (217.82s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1115 10:54:20.129256  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:55:43.197811  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m36.954292112s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (217.82s)

TestMultiControlPlane/serial/DeployApp (44.14s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 kubectl -- rollout status deployment/busybox: (5.241365565s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2 10.244.0.4 10.244.2.2'\n\n-- /stdout --"
I1115 10:55:55.158165  586561 retry.go:31] will retry after 860.117583ms: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2 10.244.0.4 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2 10.244.0.4 10.244.2.2'\n\n-- /stdout --"
I1115 10:55:56.202999  586561 retry.go:31] will retry after 899.936976ms: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2 10.244.0.4 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2 10.244.0.4 10.244.2.2'\n\n-- /stdout --"
I1115 10:55:57.262812  586561 retry.go:31] will retry after 1.157196753s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2 10.244.0.4 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2 10.244.0.4 10.244.2.2'\n\n-- /stdout --"
I1115 10:55:58.586479  586561 retry.go:31] will retry after 3.94501731s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2 10.244.0.4 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2 10.244.0.4 10.244.2.2'\n\n-- /stdout --"
I1115 10:56:02.706300  586561 retry.go:31] will retry after 6.681446652s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2 10.244.0.4 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2 10.244.0.4 10.244.2.2'\n\n-- /stdout --"
I1115 10:56:09.572066  586561 retry.go:31] will retry after 9.649868719s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2 10.244.0.4 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2 10.244.0.4 10.244.2.2'\n\n-- /stdout --"
I1115 10:56:19.418213  586561 retry.go:31] will retry after 11.687586345s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2 10.244.0.4 10.244.2.2'\n\n-- /stdout --"
E1115 10:56:22.372558  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:56:22.378910  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:56:22.390262  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:56:22.411684  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:56:22.453163  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:56:22.534551  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:56:22.696020  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:56:23.017787  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:56:23.659858  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:56:24.941219  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:56:27.503164  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-5xw75 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-vddcm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-vk6xz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-5xw75 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-vddcm -- nslookup kubernetes.default
E1115 10:56:32.625071  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-vk6xz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-5xw75 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-vddcm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-vk6xz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (44.14s)
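The retries above all come from one check: ha_test.go expects exactly three pod IPs (one busybox replica per node of the three-node HA cluster) and tolerates a fourth address from a pod that is still terminating after the rollout. A minimal Go sketch of that check against the same jsonpath output follows; the profile and replica count are copied from this run, and the helper is illustrative only.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "ha-439113", "kubectl",
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		panic(err)
	}
	// Collect the space-separated IPs and de-duplicate them.
	ips := map[string]bool{}
	for _, ip := range strings.Fields(strings.Trim(string(out), "' \n")) {
		ips[ip] = true
	}
	if len(ips) != 3 {
		// A replica that is still terminating briefly contributes an extra
		// address, which is why the harness retries instead of failing outright.
		fmt.Printf("expected 3 Pod IPs but got %d (may be temporary)\n", len(ips))
		return
	}
	fmt.Println("each busybox replica has a distinct IP")
}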
TestMultiControlPlane/serial/PingHostFromPods (1.52s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-5xw75 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-5xw75 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-vddcm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-vddcm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-vk6xz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 kubectl -- exec busybox-7b57f96db7-vk6xz -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.52s)
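Each probe above resolves host.minikube.internal inside a pod, takes field 3 of line 5 of the nslookup output (the awk 'NR==5' | cut -d' ' -f3 pipeline), and pings that address once. Below is a Go sketch of the same probe through kubectl exec; the pod name, profile and the line position of the resolved address are copied from this run, and the code is illustrative only.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// execInPod runs a shell snippet inside the given pod via "minikube kubectl -- exec".
func execInPod(pod, script string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "ha-439113", "kubectl",
		"--", "exec", pod, "--", "sh", "-c", script).CombinedOutput()
	return string(out), err
}

func main() {
	pod := "busybox-7b57f96db7-5xw75"

	// Line 5, field 3 of busybox's nslookup output carries the resolved
	// address (192.168.49.1, the host-side address, on this run).
	ip, err := execInPod(pod, "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	if err != nil {
		panic(err)
	}
	ip = strings.TrimSpace(ip)
	fmt.Println("host.minikube.internal ->", ip)

	// One ICMP echo is enough to show the pod can reach the host network.
	if out, err := execInPod(pod, "ping -c 1 "+ip); err != nil {
		panic(fmt.Sprintf("ping failed: %v\n%s", err, out))
	}
	fmt.Println("pod can reach the host")
}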
TestMultiControlPlane/serial/AddWorkerNode (60.08s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 node add --alsologtostderr -v 5
E1115 10:56:42.867688  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:57:03.349179  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 node add --alsologtostderr -v 5: (59.025879009s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5: (1.049679715s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.08s)

TestMultiControlPlane/serial/NodeLabels (0.1s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-439113 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.068271256s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

TestMultiControlPlane/serial/CopyFile (20.32s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 status --output json --alsologtostderr -v 5: (1.012735274s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp testdata/cp-test.txt ha-439113:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1077460994/001/cp-test_ha-439113.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113:/home/docker/cp-test.txt ha-439113-m02:/home/docker/cp-test_ha-439113_ha-439113-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m02 "sudo cat /home/docker/cp-test_ha-439113_ha-439113-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113:/home/docker/cp-test.txt ha-439113-m03:/home/docker/cp-test_ha-439113_ha-439113-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m03 "sudo cat /home/docker/cp-test_ha-439113_ha-439113-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113:/home/docker/cp-test.txt ha-439113-m04:/home/docker/cp-test_ha-439113_ha-439113-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m04 "sudo cat /home/docker/cp-test_ha-439113_ha-439113-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp testdata/cp-test.txt ha-439113-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1077460994/001/cp-test_ha-439113-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113-m02:/home/docker/cp-test.txt ha-439113:/home/docker/cp-test_ha-439113-m02_ha-439113.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m02 "sudo cat /home/docker/cp-test.txt"
E1115 10:57:44.310754  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113 "sudo cat /home/docker/cp-test_ha-439113-m02_ha-439113.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113-m02:/home/docker/cp-test.txt ha-439113-m03:/home/docker/cp-test_ha-439113-m02_ha-439113-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m03 "sudo cat /home/docker/cp-test_ha-439113-m02_ha-439113-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113-m02:/home/docker/cp-test.txt ha-439113-m04:/home/docker/cp-test_ha-439113-m02_ha-439113-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m04 "sudo cat /home/docker/cp-test_ha-439113-m02_ha-439113-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp testdata/cp-test.txt ha-439113-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1077460994/001/cp-test_ha-439113-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113-m03:/home/docker/cp-test.txt ha-439113:/home/docker/cp-test_ha-439113-m03_ha-439113.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113 "sudo cat /home/docker/cp-test_ha-439113-m03_ha-439113.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113-m03:/home/docker/cp-test.txt ha-439113-m02:/home/docker/cp-test_ha-439113-m03_ha-439113-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m02 "sudo cat /home/docker/cp-test_ha-439113-m03_ha-439113-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113-m03:/home/docker/cp-test.txt ha-439113-m04:/home/docker/cp-test_ha-439113-m03_ha-439113-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m04 "sudo cat /home/docker/cp-test_ha-439113-m03_ha-439113-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp testdata/cp-test.txt ha-439113-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1077460994/001/cp-test_ha-439113-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113:/home/docker/cp-test_ha-439113-m04_ha-439113.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113 "sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113-m02:/home/docker/cp-test_ha-439113-m04_ha-439113-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m02 "sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 cp ha-439113-m04:/home/docker/cp-test.txt ha-439113-m03:/home/docker/cp-test_ha-439113-m04_ha-439113-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 ssh -n ha-439113-m03 "sudo cat /home/docker/cp-test_ha-439113-m04_ha-439113-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.32s)
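
Note: each cp step above is immediately verified by reading the file back over SSH with "sudo cat". A minimal Go sketch of that copy-and-verify pattern, assuming the out/minikube-linux-arm64 binary and the ha-439113 profile from this run; copyAndVerify is a hypothetical helper, not part of helpers_test.go.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// copyAndVerify copies a local file to a node with `minikube cp`, then reads it
// back over SSH and compares the contents, mirroring the steps logged above.
func copyAndVerify(profile, node, local, remote string) error {
	cp := exec.Command("out/minikube-linux-arm64", "-p", profile, "cp", local, node+":"+remote)
	if out, err := cp.CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v\n%s", err, out)
	}
	got, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		return fmt.Errorf("ssh cat failed: %v", err)
	}
	want, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		return fmt.Errorf("content mismatch on %s:%s", node, remote)
	}
	return nil
}

func main() {
	if err := copyAndVerify("ha-439113", "ha-439113-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}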

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 node stop m02 --alsologtostderr -v 5: (12.06334333s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5: exit status 7 (802.93484ms)

                                                
                                                
-- stdout --
	ha-439113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-439113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:58:08.810885  630097 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:58:08.810993  630097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:58:08.811004  630097 out.go:374] Setting ErrFile to fd 2...
	I1115 10:58:08.811009  630097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:58:08.811292  630097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 10:58:08.811470  630097 out.go:368] Setting JSON to false
	I1115 10:58:08.811503  630097 mustload.go:66] Loading cluster: ha-439113
	I1115 10:58:08.811892  630097 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:58:08.811908  630097 status.go:174] checking status of ha-439113 ...
	I1115 10:58:08.812424  630097 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 10:58:08.812653  630097 notify.go:221] Checking for updates...
	I1115 10:58:08.831459  630097 status.go:371] ha-439113 host status = "Running" (err=<nil>)
	I1115 10:58:08.831482  630097 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:58:08.831812  630097 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113
	I1115 10:58:08.858123  630097 host.go:66] Checking if "ha-439113" exists ...
	I1115 10:58:08.858433  630097 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:58:08.858492  630097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113
	I1115 10:58:08.878302  630097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113/id_rsa Username:docker}
	I1115 10:58:08.986839  630097 ssh_runner.go:195] Run: systemctl --version
	I1115 10:58:08.993663  630097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:58:09.015889  630097 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:58:09.082711  630097 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-15 10:58:09.072553621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:58:09.083299  630097 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 10:58:09.083339  630097 api_server.go:166] Checking apiserver status ...
	I1115 10:58:09.083385  630097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:58:09.095789  630097 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	I1115 10:58:09.104692  630097 api_server.go:182] apiserver freezer: "13:freezer:/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63"
	I1115 10:58:09.104768  630097 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d546a4fc19d88603cf6ea774cdce01f79d33b54c6348a418a6435e4ee0bc05cc/crio/crio-07ac2a5381c760250aee2f3852ea559d9d9d055b76df9ddebc749e4923ac9c63/freezer.state
	I1115 10:58:09.112727  630097 api_server.go:204] freezer state: "THAWED"
	I1115 10:58:09.112758  630097 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 10:58:09.121608  630097 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 10:58:09.121638  630097 status.go:463] ha-439113 apiserver status = Running (err=<nil>)
	I1115 10:58:09.121650  630097 status.go:176] ha-439113 status: &{Name:ha-439113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:58:09.121666  630097 status.go:174] checking status of ha-439113-m02 ...
	I1115 10:58:09.121988  630097 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 10:58:09.140200  630097 status.go:371] ha-439113-m02 host status = "Stopped" (err=<nil>)
	I1115 10:58:09.140223  630097 status.go:384] host is not running, skipping remaining checks
	I1115 10:58:09.140231  630097 status.go:176] ha-439113-m02 status: &{Name:ha-439113-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:58:09.140251  630097 status.go:174] checking status of ha-439113-m03 ...
	I1115 10:58:09.140566  630097 cli_runner.go:164] Run: docker container inspect ha-439113-m03 --format={{.State.Status}}
	I1115 10:58:09.158876  630097 status.go:371] ha-439113-m03 host status = "Running" (err=<nil>)
	I1115 10:58:09.158920  630097 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 10:58:09.159295  630097 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m03
	I1115 10:58:09.183317  630097 host.go:66] Checking if "ha-439113-m03" exists ...
	I1115 10:58:09.183622  630097 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:58:09.183670  630097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m03
	I1115 10:58:09.206028  630097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m03/id_rsa Username:docker}
	I1115 10:58:09.310530  630097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:58:09.323834  630097 kubeconfig.go:125] found "ha-439113" server: "https://192.168.49.254:8443"
	I1115 10:58:09.323866  630097 api_server.go:166] Checking apiserver status ...
	I1115 10:58:09.323938  630097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:58:09.335631  630097 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	I1115 10:58:09.344853  630097 api_server.go:182] apiserver freezer: "13:freezer:/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585"
	I1115 10:58:09.345038  630097 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/42a94bae96b2f0d083b54f8a39baaaa72190ab363c39f8e591341005ef452561/crio/crio-895c7545a26e02c3fefad5a9b0d1ab19a6c01e9f679bab3540894df441900585/freezer.state
	I1115 10:58:09.355152  630097 api_server.go:204] freezer state: "THAWED"
	I1115 10:58:09.355179  630097 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 10:58:09.363444  630097 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 10:58:09.363470  630097 status.go:463] ha-439113-m03 apiserver status = Running (err=<nil>)
	I1115 10:58:09.363480  630097 status.go:176] ha-439113-m03 status: &{Name:ha-439113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:58:09.363495  630097 status.go:174] checking status of ha-439113-m04 ...
	I1115 10:58:09.363797  630097 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 10:58:09.383650  630097 status.go:371] ha-439113-m04 host status = "Running" (err=<nil>)
	I1115 10:58:09.383676  630097 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 10:58:09.383970  630097 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439113-m04
	I1115 10:58:09.400772  630097 host.go:66] Checking if "ha-439113-m04" exists ...
	I1115 10:58:09.401172  630097 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:58:09.401219  630097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439113-m04
	I1115 10:58:09.418989  630097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/ha-439113-m04/id_rsa Username:docker}
	I1115 10:58:09.530291  630097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:58:09.545597  630097 status.go:176] ha-439113-m04 status: &{Name:ha-439113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
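
Note: the stderr above shows how `minikube status` decides that the apiserver on a running node is healthy: it locates the kube-apiserver process with pgrep, reads its freezer cgroup to confirm the container is THAWED (not paused), and finally probes https://192.168.49.254:8443/healthz. A minimal Go sketch of that last probe only, with the endpoint from this run hard-coded; TLS verification is skipped here to keep the sketch self-contained (an assumption — minikube's own check may handle certificates differently).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz mirrors the final step of the status check in the log:
// an HTTPS GET against the apiserver's /healthz endpoint.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification keeps this sketch dependency-free.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://192.168.49.254:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy:", err)
		return
	}
	fmt.Println("apiserver healthz: ok")
}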

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 stop --alsologtostderr -v 5: (37.136407479s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 start --wait true --alsologtostderr -v 5: (1m38.912094191s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.22s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 node delete m03 --alsologtostderr -v 5
E1115 11:09:20.129840  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 node delete m03 --alsologtostderr -v 5: (10.776050938s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.74s)
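
Note: the last assertion above uses a go-template to print each node's Ready condition after the delete. A rough equivalent using a kubectl jsonpath query, driven from Go; the jsonpath form is a rephrasing for illustration, not what ha_test.go actually runs.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the "Ready" condition status for every node, one per line,
	// analogous to the go-template check at ha_test.go:521 above.
	out, err := exec.Command("kubectl", "get", "nodes", "-o",
		`jsonpath={range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for i, s := range strings.Fields(string(out)) {
		fmt.Printf("node %d Ready=%s\n", i, s)
	}
}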

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 stop --alsologtostderr -v 5: (36.418168542s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5: exit status 7 (130.480286ms)

                                                
                                                
-- stdout --
	ha-439113
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-439113-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-439113-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:10:00.958578  644385 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:10:00.958781  644385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:10:00.958811  644385 out.go:374] Setting ErrFile to fd 2...
	I1115 11:10:00.958831  644385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:10:00.959132  644385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:10:00.959407  644385 out.go:368] Setting JSON to false
	I1115 11:10:00.959490  644385 mustload.go:66] Loading cluster: ha-439113
	I1115 11:10:00.959554  644385 notify.go:221] Checking for updates...
	I1115 11:10:00.960734  644385 config.go:182] Loaded profile config "ha-439113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:10:00.960796  644385 status.go:174] checking status of ha-439113 ...
	I1115 11:10:00.961579  644385 cli_runner.go:164] Run: docker container inspect ha-439113 --format={{.State.Status}}
	I1115 11:10:00.982179  644385 status.go:371] ha-439113 host status = "Stopped" (err=<nil>)
	I1115 11:10:00.982202  644385 status.go:384] host is not running, skipping remaining checks
	I1115 11:10:00.982209  644385 status.go:176] ha-439113 status: &{Name:ha-439113 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:10:00.982255  644385 status.go:174] checking status of ha-439113-m02 ...
	I1115 11:10:00.982573  644385 cli_runner.go:164] Run: docker container inspect ha-439113-m02 --format={{.State.Status}}
	I1115 11:10:01.014815  644385 status.go:371] ha-439113-m02 host status = "Stopped" (err=<nil>)
	I1115 11:10:01.014840  644385 status.go:384] host is not running, skipping remaining checks
	I1115 11:10:01.014849  644385 status.go:176] ha-439113-m02 status: &{Name:ha-439113-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:10:01.014871  644385 status.go:174] checking status of ha-439113-m04 ...
	I1115 11:10:01.015257  644385 cli_runner.go:164] Run: docker container inspect ha-439113-m04 --format={{.State.Status}}
	I1115 11:10:01.034621  644385 status.go:371] ha-439113-m04 host status = "Stopped" (err=<nil>)
	I1115 11:10:01.034645  644385 status.go:384] host is not running, skipping remaining checks
	I1115 11:10:01.034652  644385 status.go:176] ha-439113-m04 status: &{Name:ha-439113-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.55s)
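
Note: the status.go:176 lines above dump one status struct per node, e.g. &{Name:ha-439113 Host:Stopped Kubelet:Stopped ...}. A small Go mirror of the fields visible in those dumps, handy when scanning these logs; the types are inferred from the printed values and may not match minikube's internal definition exactly.

package main

import "fmt"

// Status mirrors the fields visible in the status.go:176 log lines above.
type Status struct {
	Name       string
	Host       string // "Running" or "Stopped"
	Kubelet    string
	APIServer  string // "Irrelevant" on worker nodes
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := Status{Name: "ha-439113-m04", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped", Worker: true}
	// %+v on a pointer reproduces the &{Field:value ...} shape seen in the log.
	fmt.Printf("%+v\n", &s)
}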

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (79.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 node add --control-plane --alsologtostderr -v 5
E1115 11:16:22.372839  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 node add --control-plane --alsologtostderr -v 5: (1m18.143957876s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-439113 status --alsologtostderr -v 5: (1.161362269s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.31s)

                                                
                                    
x
+
TestJSONOutput/start/Command (51.05s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-979835 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-979835 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (51.046439376s)
--- PASS: TestJSONOutput/start/Command (51.05s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.79s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-979835 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-979835 --output=json --user=testUser: (5.78784249s)
--- PASS: TestJSONOutput/stop/Command (5.79s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-937349 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-937349 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.91154ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9c180492-6c9b-45d6-9564-6b019a2eb34f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-937349] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1fdbef6b-1ec8-419e-9bdb-3dff1c491e36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21894"}}
	{"specversion":"1.0","id":"b4e12a6b-9ad0-4b77-b070-48d7167d5fd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2df97d1e-c492-46e4-ba1f-20c20df029dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig"}}
	{"specversion":"1.0","id":"47560b7e-ad91-48ec-aa4a-10d774304ef1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube"}}
	{"specversion":"1.0","id":"fd5e21e0-471b-4092-b028-c2a223188585","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"dc1e8f5b-6169-4a85-ae7f-4842849bc9a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b252d348-dd98-4be6-ac79-c36ec54c364c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-937349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-937349
--- PASS: TestErrorJSONOutput (0.25s)
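
Note: with --output=json every progress line is a CloudEvents-style record (specversion, id, source, type, datacontenttype, data), as in the stdout above. A short Go sketch that scans such lines and surfaces the io.k8s.sigs.minikube.error event; the struct is defined here for illustration and only covers the fields visible above.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event matches the fields visible in the --output=json lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` in
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue // skip non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}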

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (39.07s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-274859 --network=
E1115 11:19:20.133000  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-274859 --network=: (36.764345386s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-274859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-274859
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-274859: (2.280564128s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.07s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (38.41s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-477600 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-477600 --network=bridge: (36.323778719s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-477600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-477600
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-477600: (2.057819342s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.41s)

                                                
                                    
x
+
TestKicExistingNetwork (38.22s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1115 11:20:09.165204  586561 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1115 11:20:09.181828  586561 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1115 11:20:09.182799  586561 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1115 11:20:09.182845  586561 cli_runner.go:164] Run: docker network inspect existing-network
W1115 11:20:09.198548  586561 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1115 11:20:09.198580  586561 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1115 11:20:09.198597  586561 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1115 11:20:09.198716  586561 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1115 11:20:09.216918  586561 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-70b4341e5839 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:cf:e4:18:31:11} reservation:<nil>}
I1115 11:20:09.217283  586561 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40002d3ac0}
I1115 11:20:09.217309  586561 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1115 11:20:09.217360  586561 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1115 11:20:09.278277  586561 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-311674 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-311674 --network=existing-network: (35.799620058s)
helpers_test.go:175: Cleaning up "existing-network-311674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-311674
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-311674: (2.264845163s)
I1115 11:20:47.359898  586561 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (38.22s)
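
Note: the network_create.go lines above show the sequence: inspect the requested network, find it missing, skip the already-taken 192.168.49.0/24, pick the free 192.168.58.0/24, and create a labelled bridge network. A Go sketch that reproduces just the final `docker network create` invocation with the values chosen in this run (the subnet-scanning logic itself is more involved in minikube).

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as network_create.go:124 above, with this run's values.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("create failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("created network existing-network: %s", out)
}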

                                                
                                    
x
+
TestKicCustomSubnet (35.42s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-750801 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-750801 --subnet=192.168.60.0/24: (33.110513174s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-750801 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-750801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-750801
E1115 11:21:22.373047  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-750801: (2.276076971s)
--- PASS: TestKicCustomSubnet (35.42s)

                                                
                                    
x
+
TestKicStaticIP (40.13s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-477638 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-477638 --static-ip=192.168.200.200: (37.700188771s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-477638 ip
helpers_test.go:175: Cleaning up "static-ip-477638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-477638
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-477638: (2.250530086s)
--- PASS: TestKicStaticIP (40.13s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (72.04s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-503781 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-503781 --driver=docker  --container-runtime=crio: (32.667437444s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-506226 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-506226 --driver=docker  --container-runtime=crio: (33.740413746s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-503781
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-506226
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-506226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-506226
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-506226: (2.107473091s)
helpers_test.go:175: Cleaning up "first-503781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-503781
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-503781: (2.067137792s)
--- PASS: TestMinikubeProfile (72.04s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.77s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-058586 --memory=3072 --mount-string /tmp/TestMountStartserial2725266672/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-058586 --memory=3072 --mount-string /tmp/TestMountStartserial2725266672/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.773170519s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.77s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-058586 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (9.35s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-060645 --memory=3072 --mount-string /tmp/TestMountStartserial2725266672/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-060645 --memory=3072 --mount-string /tmp/TestMountStartserial2725266672/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.34655269s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.35s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-060645 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.76s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-058586 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-058586 --alsologtostderr -v=5: (1.761530142s)
--- PASS: TestMountStart/serial/DeleteFirst (1.76s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-060645 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-060645
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-060645: (1.308138235s)
--- PASS: TestMountStart/serial/Stop (1.31s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.26s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-060645
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-060645: (7.258505974s)
--- PASS: TestMountStart/serial/RestartStopped (8.26s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-060645 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)
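
Note: each VerifyMount step above only lists /minikube-host over SSH. A slightly stronger, hypothetical round-trip check in Go: write a marker file into the host side of the mount used in this run, then confirm it is visible from inside the node; the paths and profile name are taken from the commands above.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	// Host side of the 9p mount configured in this run.
	hostDir := "/tmp/TestMountStartserial2725266672/001"
	marker := filepath.Join(hostDir, "marker.txt")
	if err := os.WriteFile(marker, []byte("hello from host\n"), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	// List the guest side over `minikube ssh`, as the test does.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-2-060645",
		"ssh", "--", "ls", "/minikube-host").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	fmt.Println("marker visible in node:", strings.Contains(string(out), "marker.txt"))
}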

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (140.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-805594 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1115 11:24:20.130197  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-805594 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m19.536976752s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (140.07s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-805594 -- rollout status deployment/busybox: (3.696902727s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- exec busybox-7b57f96db7-n2t7t -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- exec busybox-7b57f96db7-qr9sl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- exec busybox-7b57f96db7-n2t7t -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- exec busybox-7b57f96db7-qr9sl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- exec busybox-7b57f96db7-n2t7t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- exec busybox-7b57f96db7-qr9sl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.44s)
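
The DNS checks above amount to the following manual sequence (pod names vary per run; the manifest lives in minikube's test data):

	# deploy the busybox test workload and wait for it to roll out
	kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
	kubectl rollout status deployment/busybox
	# resolve an external and an in-cluster name from one of the pods
	kubectl get pods -o jsonpath='{.items[*].metadata.name}'
	kubectl exec <busybox-pod> -- nslookup kubernetes.io
	kubectl exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local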

TestMultiNode/serial/PingHostFrom2Pods (0.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- exec busybox-7b57f96db7-n2t7t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- exec busybox-7b57f96db7-n2t7t -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- exec busybox-7b57f96db7-qr9sl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-805594 -- exec busybox-7b57f96db7-qr9sl -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)
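
The host-reachability check can be repeated from any pod; the gateway address (192.168.67.1 in this run) is network-specific:

	# extract the address that host.minikube.internal resolves to, then ping it once
	kubectl exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl exec <busybox-pod> -- sh -c "ping -c 1 192.168.67.1"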

TestMultiNode/serial/AddNode (56.68s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-805594 -v=5 --alsologtostderr
E1115 11:26:22.373020  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-805594 -v=5 --alsologtostderr: (55.959209816s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.68s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-805594 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.68s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 cp testdata/cp-test.txt multinode-805594:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 cp multinode-805594:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1813324078/001/cp-test_multinode-805594.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 cp multinode-805594:/home/docker/cp-test.txt multinode-805594-m02:/home/docker/cp-test_multinode-805594_multinode-805594-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594-m02 "sudo cat /home/docker/cp-test_multinode-805594_multinode-805594-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 cp multinode-805594:/home/docker/cp-test.txt multinode-805594-m03:/home/docker/cp-test_multinode-805594_multinode-805594-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594-m03 "sudo cat /home/docker/cp-test_multinode-805594_multinode-805594-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 cp testdata/cp-test.txt multinode-805594-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 cp multinode-805594-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1813324078/001/cp-test_multinode-805594-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 cp multinode-805594-m02:/home/docker/cp-test.txt multinode-805594:/home/docker/cp-test_multinode-805594-m02_multinode-805594.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594 "sudo cat /home/docker/cp-test_multinode-805594-m02_multinode-805594.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 cp multinode-805594-m02:/home/docker/cp-test.txt multinode-805594-m03:/home/docker/cp-test_multinode-805594-m02_multinode-805594-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594-m03 "sudo cat /home/docker/cp-test_multinode-805594-m02_multinode-805594-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 cp testdata/cp-test.txt multinode-805594-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 cp multinode-805594-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1813324078/001/cp-test_multinode-805594-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 cp multinode-805594-m03:/home/docker/cp-test.txt multinode-805594:/home/docker/cp-test_multinode-805594-m03_multinode-805594.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594 "sudo cat /home/docker/cp-test_multinode-805594-m03_multinode-805594.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 cp multinode-805594-m03:/home/docker/cp-test.txt multinode-805594-m02:/home/docker/cp-test_multinode-805594-m03_multinode-805594-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 ssh -n multinode-805594-m02 "sudo cat /home/docker/cp-test_multinode-805594-m03_multinode-805594-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.68s)
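
The copy matrix above boils down to three forms of minikube cp plus an ssh read-back; a sketch with illustrative names:

	# local file -> node, node -> local path, and node -> other node
	minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
	minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test_multinode-demo.txt
	minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
	# confirm the content arrived on the target node
	minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"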

TestMultiNode/serial/StopNode (2.47s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-805594 node stop m03: (1.317351639s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-805594 status: exit status 7 (593.451784ms)

                                                
                                                
-- stdout --
	multinode-805594
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-805594-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-805594-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-805594 status --alsologtostderr: exit status 7 (557.400566ms)

                                                
                                                
-- stdout --
	multinode-805594
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-805594-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-805594-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:27:24.087751  696261 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:27:24.087920  696261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:27:24.087928  696261 out.go:374] Setting ErrFile to fd 2...
	I1115 11:27:24.087933  696261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:27:24.088266  696261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:27:24.088508  696261 out.go:368] Setting JSON to false
	I1115 11:27:24.088554  696261 mustload.go:66] Loading cluster: multinode-805594
	I1115 11:27:24.088617  696261 notify.go:221] Checking for updates...
	I1115 11:27:24.090158  696261 config.go:182] Loaded profile config "multinode-805594": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:27:24.090265  696261 status.go:174] checking status of multinode-805594 ...
	I1115 11:27:24.091210  696261 cli_runner.go:164] Run: docker container inspect multinode-805594 --format={{.State.Status}}
	I1115 11:27:24.113910  696261 status.go:371] multinode-805594 host status = "Running" (err=<nil>)
	I1115 11:27:24.113937  696261 host.go:66] Checking if "multinode-805594" exists ...
	I1115 11:27:24.114235  696261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-805594
	I1115 11:27:24.138602  696261 host.go:66] Checking if "multinode-805594" exists ...
	I1115 11:27:24.138927  696261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:27:24.138985  696261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-805594
	I1115 11:27:24.161800  696261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33644 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/multinode-805594/id_rsa Username:docker}
	I1115 11:27:24.266714  696261 ssh_runner.go:195] Run: systemctl --version
	I1115 11:27:24.273726  696261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:27:24.286539  696261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:27:24.342802  696261 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 11:27:24.333704287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:27:24.343391  696261 kubeconfig.go:125] found "multinode-805594" server: "https://192.168.67.2:8443"
	I1115 11:27:24.343436  696261 api_server.go:166] Checking apiserver status ...
	I1115 11:27:24.343481  696261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 11:27:24.356166  696261 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1224/cgroup
	I1115 11:27:24.364296  696261 api_server.go:182] apiserver freezer: "13:freezer:/docker/69fb336785b8601a5951c877a33592e918729192eb161415238e68863f207584/crio/crio-f6072428a3dd4199971487772095bf1a0d50efb601662c084797513bcbaf995e"
	I1115 11:27:24.364373  696261 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/69fb336785b8601a5951c877a33592e918729192eb161415238e68863f207584/crio/crio-f6072428a3dd4199971487772095bf1a0d50efb601662c084797513bcbaf995e/freezer.state
	I1115 11:27:24.372292  696261 api_server.go:204] freezer state: "THAWED"
	I1115 11:27:24.372316  696261 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1115 11:27:24.380676  696261 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1115 11:27:24.380704  696261 status.go:463] multinode-805594 apiserver status = Running (err=<nil>)
	I1115 11:27:24.380716  696261 status.go:176] multinode-805594 status: &{Name:multinode-805594 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:27:24.380732  696261 status.go:174] checking status of multinode-805594-m02 ...
	I1115 11:27:24.381064  696261 cli_runner.go:164] Run: docker container inspect multinode-805594-m02 --format={{.State.Status}}
	I1115 11:27:24.397776  696261 status.go:371] multinode-805594-m02 host status = "Running" (err=<nil>)
	I1115 11:27:24.397803  696261 host.go:66] Checking if "multinode-805594-m02" exists ...
	I1115 11:27:24.398106  696261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-805594-m02
	I1115 11:27:24.415021  696261 host.go:66] Checking if "multinode-805594-m02" exists ...
	I1115 11:27:24.415335  696261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 11:27:24.415386  696261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-805594-m02
	I1115 11:27:24.433362  696261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33649 SSHKeyPath:/home/jenkins/minikube-integration/21894-584713/.minikube/machines/multinode-805594-m02/id_rsa Username:docker}
	I1115 11:27:24.546323  696261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 11:27:24.561722  696261 status.go:176] multinode-805594-m02 status: &{Name:multinode-805594-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:27:24.561754  696261 status.go:174] checking status of multinode-805594-m03 ...
	I1115 11:27:24.562065  696261 cli_runner.go:164] Run: docker container inspect multinode-805594-m03 --format={{.State.Status}}
	I1115 11:27:24.579371  696261 status.go:371] multinode-805594-m03 host status = "Stopped" (err=<nil>)
	I1115 11:27:24.579391  696261 status.go:384] host is not running, skipping remaining checks
	I1115 11:27:24.579398  696261 status.go:176] multinode-805594-m03 status: &{Name:multinode-805594-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.47s)
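
Stopping a single worker leaves the rest of the profile running, and status reflects the partial outage with a non-zero exit:

	# stop only the third node
	minikube -p multinode-demo node stop m03
	# exit status 7 signals that at least one host is Stopped
	minikube -p multinode-demo status || echo "status exited $?"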

TestMultiNode/serial/StartAfterStop (8.46s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-805594 node start m03 -v=5 --alsologtostderr: (7.650342939s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.46s)
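
Bringing the node back is the inverse operation:

	# restart the stopped node and confirm it rejoins the cluster
	minikube -p multinode-demo node start m03
	minikube -p multinode-demo status
	kubectl get nodes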

TestMultiNode/serial/RestartKeepsNodes (72.74s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-805594
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-805594
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-805594: (25.159073379s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-805594 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-805594 --wait=true -v=5 --alsologtostderr: (47.456535615s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-805594
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.74s)
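
The restart check is a full stop/start of the profile; previously added nodes are expected to come back without being re-added:

	minikube node list -p multinode-demo
	minikube stop -p multinode-demo
	minikube start -p multinode-demo --wait=true
	# the node list should match the pre-stop list
	minikube node list -p multinode-demo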

TestMultiNode/serial/DeleteNode (5.65s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-805594 node delete m03: (4.938866925s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.65s)
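
Deleting a node removes it from both the profile and the Kubernetes cluster:

	minikube -p multinode-demo node delete m03
	# only the remaining nodes should show up, all Ready
	kubectl get nodes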

TestMultiNode/serial/StopMultiNode (24.11s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 stop
E1115 11:29:03.201034  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-805594 stop: (23.922459812s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-805594 status: exit status 7 (96.32636ms)

                                                
                                                
-- stdout --
	multinode-805594
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-805594-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-805594 status --alsologtostderr: exit status 7 (91.767593ms)

                                                
                                                
-- stdout --
	multinode-805594
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-805594-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:29:15.497783  704035 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:29:15.497894  704035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:29:15.497902  704035 out.go:374] Setting ErrFile to fd 2...
	I1115 11:29:15.497907  704035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:29:15.498186  704035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:29:15.498368  704035 out.go:368] Setting JSON to false
	I1115 11:29:15.498401  704035 mustload.go:66] Loading cluster: multinode-805594
	I1115 11:29:15.498488  704035 notify.go:221] Checking for updates...
	I1115 11:29:15.498811  704035 config.go:182] Loaded profile config "multinode-805594": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:29:15.498829  704035 status.go:174] checking status of multinode-805594 ...
	I1115 11:29:15.499351  704035 cli_runner.go:164] Run: docker container inspect multinode-805594 --format={{.State.Status}}
	I1115 11:29:15.518421  704035 status.go:371] multinode-805594 host status = "Stopped" (err=<nil>)
	I1115 11:29:15.518444  704035 status.go:384] host is not running, skipping remaining checks
	I1115 11:29:15.518451  704035 status.go:176] multinode-805594 status: &{Name:multinode-805594 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 11:29:15.518483  704035 status.go:174] checking status of multinode-805594-m02 ...
	I1115 11:29:15.518801  704035 cli_runner.go:164] Run: docker container inspect multinode-805594-m02 --format={{.State.Status}}
	I1115 11:29:15.544852  704035 status.go:371] multinode-805594-m02 host status = "Stopped" (err=<nil>)
	I1115 11:29:15.544896  704035 status.go:384] host is not running, skipping remaining checks
	I1115 11:29:15.544912  704035 status.go:176] multinode-805594-m02 status: &{Name:multinode-805594-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.11s)

TestMultiNode/serial/RestartMultiNode (51.23s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-805594 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1115 11:29:20.129956  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:29:25.441199  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-805594 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.549330734s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-805594 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.23s)

TestMultiNode/serial/ValidateNameConflict (37.86s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-805594
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-805594-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-805594-m02 --driver=docker  --container-runtime=crio: exit status 14 (92.397996ms)

                                                
                                                
-- stdout --
	* [multinode-805594-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-805594-m02' is duplicated with machine name 'multinode-805594-m02' in profile 'multinode-805594'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-805594-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-805594-m03 --driver=docker  --container-runtime=crio: (35.243695507s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-805594
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-805594: exit status 80 (377.81905ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-805594 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-805594-m03 already exists in multinode-805594-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-805594-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-805594-m03: (2.096953728s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.86s)
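
The naming rules exercised here: a new profile may not reuse a machine name that already belongs to an existing multi-node profile, while any unused name is accepted as a separate cluster. A sketch with illustrative names:

	# rejected with exit status 14: the name collides with an existing node of multinode-demo
	minikube start -p multinode-demo-m02 --driver=docker --container-runtime=crio
	# accepted: an unused name simply becomes its own single-node profile
	minikube start -p multinode-demo-m03 --driver=docker --container-runtime=crio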

TestPreload (132.47s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-855050 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1115 11:31:22.372891  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-855050 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m5.705100656s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-855050 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-855050 image pull gcr.io/k8s-minikube/busybox: (2.338210413s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-855050
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-855050: (5.933999472s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-855050 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-855050 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (55.757641609s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-855050 image list
helpers_test.go:175: Cleaning up "test-preload-855050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-855050
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-855050: (2.486493377s)
--- PASS: TestPreload (132.47s)
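
The preload scenario can be replayed by hand: start without the preloaded image tarball, side-load an extra image, then restart and confirm the image survives (profile name illustrative):

	minikube start -p preload-demo --memory=3072 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --memory=3072 --wait=true --driver=docker --container-runtime=crio
	# the busybox image pulled before the stop should still be listed
	minikube -p preload-demo image list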

TestScheduledStopUnix (111.54s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-033941 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-033941 --memory=3072 --driver=docker  --container-runtime=crio: (34.825091116s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-033941 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 11:33:36.439146  718071 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:33:36.439417  718071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:33:36.439446  718071 out.go:374] Setting ErrFile to fd 2...
	I1115 11:33:36.439465  718071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:33:36.439761  718071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:33:36.440079  718071 out.go:368] Setting JSON to false
	I1115 11:33:36.440239  718071 mustload.go:66] Loading cluster: scheduled-stop-033941
	I1115 11:33:36.440633  718071 config.go:182] Loaded profile config "scheduled-stop-033941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:33:36.440761  718071 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/config.json ...
	I1115 11:33:36.441014  718071 mustload.go:66] Loading cluster: scheduled-stop-033941
	I1115 11:33:36.441180  718071 config.go:182] Loaded profile config "scheduled-stop-033941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-033941 -n scheduled-stop-033941
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-033941 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 11:33:36.916831  718161 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:33:36.917060  718161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:33:36.917086  718161 out.go:374] Setting ErrFile to fd 2...
	I1115 11:33:36.917106  718161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:33:36.917402  718161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:33:36.917713  718161 out.go:368] Setting JSON to false
	I1115 11:33:36.917922  718161 daemonize_unix.go:73] killing process 718087 as it is an old scheduled stop
	I1115 11:33:36.917994  718161 mustload.go:66] Loading cluster: scheduled-stop-033941
	I1115 11:33:36.918347  718161 config.go:182] Loaded profile config "scheduled-stop-033941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:33:36.918420  718161 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/config.json ...
	I1115 11:33:36.918584  718161 mustload.go:66] Loading cluster: scheduled-stop-033941
	I1115 11:33:36.918714  718161 config.go:182] Loaded profile config "scheduled-stop-033941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:180: process 718087 is a zombie
I1115 11:33:36.926874  586561 retry.go:31] will retry after 79.494µs: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
I1115 11:33:36.927563  586561 retry.go:31] will retry after 220.23µs: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
I1115 11:33:36.932696  586561 retry.go:31] will retry after 176.86µs: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
I1115 11:33:36.933872  586561 retry.go:31] will retry after 253.873µs: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
I1115 11:33:36.934992  586561 retry.go:31] will retry after 381.512µs: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
I1115 11:33:36.936273  586561 retry.go:31] will retry after 1.124734ms: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
I1115 11:33:36.938461  586561 retry.go:31] will retry after 1.210352ms: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
I1115 11:33:36.940649  586561 retry.go:31] will retry after 1.332857ms: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
I1115 11:33:36.942868  586561 retry.go:31] will retry after 3.494168ms: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
I1115 11:33:36.946452  586561 retry.go:31] will retry after 3.261483ms: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
I1115 11:33:36.950642  586561 retry.go:31] will retry after 5.644066ms: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
I1115 11:33:36.956817  586561 retry.go:31] will retry after 9.588416ms: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
I1115 11:33:36.967049  586561 retry.go:31] will retry after 13.454173ms: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
I1115 11:33:36.981248  586561 retry.go:31] will retry after 26.472342ms: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
I1115 11:33:37.008617  586561 retry.go:31] will retry after 27.168183ms: open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-033941 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-033941 -n scheduled-stop-033941
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-033941
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-033941 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 11:34:02.844668  718523 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:34:02.844841  718523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:34:02.844882  718523 out.go:374] Setting ErrFile to fd 2...
	I1115 11:34:02.844888  718523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:34:02.845153  718523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:34:02.845516  718523 out.go:368] Setting JSON to false
	I1115 11:34:02.845611  718523 mustload.go:66] Loading cluster: scheduled-stop-033941
	I1115 11:34:02.845956  718523 config.go:182] Loaded profile config "scheduled-stop-033941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:34:02.846031  718523 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/scheduled-stop-033941/config.json ...
	I1115 11:34:02.846215  718523 mustload.go:66] Loading cluster: scheduled-stop-033941
	I1115 11:34:02.846332  718523 config.go:182] Loaded profile config "scheduled-stop-033941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1115 11:34:20.131257  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-033941
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-033941: exit status 7 (76.348442ms)

                                                
                                                
-- stdout --
	scheduled-stop-033941
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-033941 -n scheduled-stop-033941
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-033941 -n scheduled-stop-033941: exit status 7 (75.235122ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-033941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-033941
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-033941: (5.093220506s)
--- PASS: TestScheduledStopUnix (111.54s)
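
Scheduled stops are driven entirely by the stop subcommand; the sequence above corresponds roughly to:

	# schedule a stop well in the future, then cancel it
	minikube stop -p sched-demo --schedule 5m
	minikube stop -p sched-demo --cancel-scheduled
	# schedule a short stop and let it fire; status then reports Stopped (exit 7)
	minikube stop -p sched-demo --schedule 15s
	sleep 20; minikube status -p sched-demo || echo "status exited $?"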

TestInsufficientStorage (14.05s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-154100 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-154100 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.433100763s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"850253dd-68b0-495f-88e5-77bce0b491df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-154100] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b10806d-00b5-4aa6-b370-08c570cb7402","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21894"}}
	{"specversion":"1.0","id":"fe9ebb5e-110b-4254-a585-279f52537cf7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dd17cb5d-6dcc-4ccd-a8be-64a478493594","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig"}}
	{"specversion":"1.0","id":"64bf98a6-f3ba-4b60-a00f-1aedd11e5d9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube"}}
	{"specversion":"1.0","id":"ce216f85-5842-4a70-acdf-f0bd4031cba4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2cc46470-21a9-44ba-a47a-81c46ec33e37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"96ab7113-52d0-4084-b81d-297728a2db6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"637d05f6-837a-4d91-a484-cfc11cd94c4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c46cfb87-cd90-429e-8503-1592edf3ff41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c41ea11-449d-4533-82ea-9e93539d284e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"61868343-21e7-4938-9c5d-9437895d54b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-154100\" primary control-plane node in \"insufficient-storage-154100\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fcf0b171-3c17-4ac0-8679-5ad772f2bb52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1761985721-21837 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"375dd201-7c49-4249-8ac2-cb31dfd93c93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6381948f-92c7-42e5-a6ce-2c68debeb54a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-154100 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-154100 --output=json --layout=cluster: exit status 7 (301.390912ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-154100","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-154100","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1115 11:35:04.818383  720263 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-154100" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-154100 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-154100 --output=json --layout=cluster: exit status 7 (317.765731ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-154100","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-154100","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1115 11:35:05.138728  720330 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-154100" does not appear in /home/jenkins/minikube-integration/21894-584713/kubeconfig
	E1115 11:35:05.149466  720330 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/insufficient-storage-154100/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-154100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-154100
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-154100: (1.990862915s)
--- PASS: TestInsufficientStorage (14.05s)
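
The low-disk behaviour is simulated through test-only environment variables (visible in the JSON output above); they are not a supported user-facing knob, but the sketch shows what the test drives:

	# pretend /var has 100 units of capacity with only 19 available
	export MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19
	# start aborts with exit status 26 (RSRC_DOCKER_STORAGE)
	minikube start -p storage-demo --memory=3072 --output=json --driver=docker --container-runtime=crio || echo "start exited $?"
	# cluster status reports StatusCode 507 / InsufficientStorage
	minikube status -p storage-demo --output=json --layout=cluster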

TestRunningBinaryUpgrade (55.76s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.954632774 start -p running-upgrade-165074 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.954632774 start -p running-upgrade-165074 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.879391315s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-165074 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-165074 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.112792322s)
helpers_test.go:175: Cleaning up "running-upgrade-165074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-165074
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-165074: (2.012110077s)
--- PASS: TestRunningBinaryUpgrade (55.76s)
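
The upgrade path here is: create the cluster with an older minikube release, then point the current binary at the same, still running, profile. A sketch, with the old binary path illustrative (this run used a downloaded v1.32.0 build under /tmp):

	# the old release creates the cluster (v1.32.x still spells the flag --vm-driver)
	/tmp/minikube-v1.32.0 start -p running-upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=crio
	# the newer binary takes over the running profile in place
	minikube start -p running-upgrade-demo --memory=3072 --driver=docker --container-runtime=crio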

TestKubernetesUpgrade (367.04s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-436490 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-436490 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.528777709s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-436490
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-436490: (3.560917201s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-436490 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-436490 status --format={{.Host}}: exit status 7 (277.57477ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-436490 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-436490 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.781002771s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-436490 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-436490 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-436490 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (130.637833ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-436490] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-436490
	    minikube start -p kubernetes-upgrade-436490 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4364902 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-436490 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-436490 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-436490 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.057370383s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-436490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-436490
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-436490: (2.566258614s)
--- PASS: TestKubernetesUpgrade (367.04s)
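
Note: the steps above cover the whole Kubernetes version-upgrade path: start at v1.28.0, stop, observe status exit code 7, restart at v1.34.1, and confirm that a downgrade attempt is refused with exit code 106. The following is a minimal Go sketch of the same command sequence, assuming a minikube binary on PATH; the run helper is illustrative only.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, discards its output, and returns the exit
// code (0 on success). Illustrative helper, not part of the test suite.
func run(name string, args ...string) int {
	if err := exec.Command(name, args...).Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1
	}
	return 0
}

func main() {
	const profile = "kubernetes-upgrade-436490"

	run("minikube", "start", "-p", profile, "--memory=3072",
		"--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=crio")
	run("minikube", "stop", "-p", profile)

	// A stopped cluster makes `minikube status` exit with code 7,
	// which the test records as "status error ... (may be ok)".
	fmt.Println("status exit code:", run("minikube", "status", "-p", profile))

	// Upgrade in place to the newer Kubernetes version.
	run("minikube", "start", "-p", profile, "--memory=3072",
		"--kubernetes-version=v1.34.1", "--driver=docker", "--container-runtime=crio")

	// A downgrade of the existing cluster is refused with exit code 106
	// (K8S_DOWNGRADE_UNSUPPORTED), matching the stderr shown above.
	fmt.Println("downgrade exit code:", run("minikube", "start", "-p", profile,
		"--memory=3072", "--kubernetes-version=v1.28.0"))
}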

                                                
                                    
TestMissingContainerUpgrade (117.25s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2083105805 start -p missing-upgrade-028715 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2083105805 start -p missing-upgrade-028715 --memory=3072 --driver=docker  --container-runtime=crio: (1m4.101515295s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-028715
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-028715
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-028715 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-028715 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.454685142s)
helpers_test.go:175: Cleaning up "missing-upgrade-028715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-028715
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-028715: (2.114027689s)
--- PASS: TestMissingContainerUpgrade (117.25s)
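
Note: this test simulates a profile whose node container has vanished behind minikube's back. The sketch below mirrors that sequence under the same assumptions as before (placeholder path for the old release binary).

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const profile = "missing-upgrade-028715"
	const oldBinary = "/tmp/minikube-v1.32.0" // assumed path; the test uses a temp download

	// 1. Create the cluster with the older release.
	_ = exec.Command(oldBinary, "start", "-p", profile,
		"--memory=3072", "--driver=docker", "--container-runtime=crio").Run()

	// 2. Remove the node container directly, leaving the profile
	//    metadata without a backing container.
	_ = exec.Command("docker", "stop", profile).Run()
	_ = exec.Command("docker", "rm", profile).Run()

	// 3. The current binary is expected to recreate the missing
	//    container on the next start, which is what the test verifies.
	if err := exec.Command("out/minikube-linux-arm64", "start", "-p", profile,
		"--memory=3072", "--driver=docker", "--container-runtime=crio").Run(); err != nil {
		fmt.Println("recovery start failed:", err)
		return
	}
	fmt.Println("recovered profile", profile)
}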

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-505051 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-505051 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (90.570731ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-505051] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
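
Note: this check is purely a CLI usage validation. A small Go sketch of the same assertion, using os/exec to confirm the exit code and the MK_USAGE marker in stderr:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Combining --no-kubernetes with --kubernetes-version should fail
	// fast with exit status 14 and an MK_USAGE message, before any
	// container is created.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "NoKubernetes-505051",
		"--no-kubernetes", "--kubernetes-version=v1.28.0",
		"--driver=docker", "--container-runtime=crio")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr

	err := cmd.Run()
	ee, ok := err.(*exec.ExitError)
	if !ok || ee.ExitCode() != 14 {
		fmt.Println("expected exit status 14, got:", err)
		return
	}
	if !strings.Contains(stderr.String(), "MK_USAGE") {
		fmt.Println("expected an MK_USAGE error in stderr")
		return
	}
	fmt.Println("flag conflict rejected as expected")
}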

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (48.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-505051 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-505051 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (47.54975588s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-505051 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.06s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-505051 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-505051 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.43539401s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-505051 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-505051 status -o json: exit status 2 (473.879043ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-505051","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-505051
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-505051: (2.432870968s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.34s)
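
Note: the JSON emitted by `status -o json` is easy to consume programmatically. A minimal Go sketch that decodes the fields shown above; the struct is ours and only mirrors the keys visible in this run.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus mirrors the JSON keys shown above; the struct name is
// ours, the fields come from the log output.
type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// `status -o json` exits non-zero (2 in the run above) when
	// components are stopped, so stdout is decoded even though Output
	// also reports an error.
	out, _ := exec.Command("out/minikube-linux-arm64", "-p", "NoKubernetes-505051",
		"status", "-o", "json").Output()

	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode status:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}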

                                                
                                    
TestNoKubernetes/serial/Start (8.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-505051 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-505051 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.911886103s)
--- PASS: TestNoKubernetes/serial/Start (8.91s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21894-584713/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-505051 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-505051 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.115152ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
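
Note: the verification relies on exit-code propagation: `systemctl is-active` fails inside the node and `minikube ssh` passes the failure through (status 1 in the run above). A short Go sketch of that expectation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The check passes when kubelet is NOT running: `systemctl
	// is-active` exits non-zero inside the node, which `minikube ssh`
	// propagates as a non-zero exit of its own.
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-505051",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active, as expected:", err)
		return
	}
	fmt.Println("unexpected: kubelet reports active")
}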

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.73s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-505051
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-505051: (1.292274515s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-505051 --driver=docker  --container-runtime=crio
E1115 11:36:22.373124  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-505051 --driver=docker  --container-runtime=crio: (7.490077126s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.49s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-505051 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-505051 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.6497ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (57.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1675803624 start -p stopped-upgrade-484617 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1675803624 start -p stopped-upgrade-484617 --memory=3072 --vm-driver=docker  --container-runtime=crio: (37.358155459s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1675803624 -p stopped-upgrade-484617 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1675803624 -p stopped-upgrade-484617 stop: (1.227410551s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-484617 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-484617 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.701905076s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (57.29s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-484617
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-484617: (1.210404812s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

                                                
                                    
TestPause/serial/Start (80.8s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-137857 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1115 11:39:20.130233  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-137857 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m20.791802882s)
--- PASS: TestPause/serial/Start (80.80s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (28.99s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-137857 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-137857 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.965903886s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.99s)

                                                
                                    
TestNetworkPlugins/group/false (5.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-949287 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-949287 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (330.062009ms)

                                                
                                                
-- stdout --
	* [false-949287] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 11:41:45.004013  757283 out.go:360] Setting OutFile to fd 1 ...
	I1115 11:41:45.004672  757283 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:41:45.004692  757283 out.go:374] Setting ErrFile to fd 2...
	I1115 11:41:45.004698  757283 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 11:41:45.005061  757283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-584713/.minikube/bin
	I1115 11:41:45.005614  757283 out.go:368] Setting JSON to false
	I1115 11:41:45.006691  757283 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12256,"bootTime":1763194649,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1115 11:41:45.006770  757283 start.go:143] virtualization:  
	I1115 11:41:45.010579  757283 out.go:179] * [false-949287] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 11:41:45.013742  757283 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 11:41:45.013973  757283 notify.go:221] Checking for updates...
	I1115 11:41:45.020785  757283 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 11:41:45.023839  757283 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-584713/kubeconfig
	I1115 11:41:45.026835  757283 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-584713/.minikube
	I1115 11:41:45.030175  757283 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 11:41:45.033078  757283 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 11:41:45.036575  757283 config.go:182] Loaded profile config "kubernetes-upgrade-436490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 11:41:45.036712  757283 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 11:41:45.095453  757283 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 11:41:45.095610  757283 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 11:41:45.225591  757283 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 11:41:45.203158748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 11:41:45.225731  757283 docker.go:319] overlay module found
	I1115 11:41:45.229159  757283 out.go:179] * Using the docker driver based on user configuration
	I1115 11:41:45.232181  757283 start.go:309] selected driver: docker
	I1115 11:41:45.232210  757283 start.go:930] validating driver "docker" against <nil>
	I1115 11:41:45.232226  757283 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 11:41:45.235999  757283 out.go:203] 
	W1115 11:41:45.239042  757283 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1115 11:41:45.241941  757283 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-949287 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-949287

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-949287

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-949287

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-949287

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-949287

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-949287

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-949287

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-949287

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-949287

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-949287

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-949287

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-949287" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-949287" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 11:37:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-436490
contexts:
- context:
    cluster: kubernetes-upgrade-436490
    user: kubernetes-upgrade-436490
  name: kubernetes-upgrade-436490
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-436490
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kubernetes-upgrade-436490/client.crt
    client-key: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kubernetes-upgrade-436490/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-949287

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-949287"

                                                
                                                
----------------------- debugLogs end: false-949287 [took: 4.528382778s] --------------------------------
helpers_test.go:175: Cleaning up "false-949287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-949287
--- PASS: TestNetworkPlugins/group/false (5.10s)
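
Note: as with the --no-kubernetes flag conflict earlier, this is a fast-fail usage check: the crio runtime without CNI is rejected before any cluster work starts. A short Go sketch of the assertion, checking both the exit code and the error text:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// With the crio runtime, --cni=false is rejected up front
	// (exit status 14, "The \"crio\" container runtime requires CNI").
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "false-949287",
		"--memory=3072", "--cni=false", "--driver=docker", "--container-runtime=crio")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr

	err := cmd.Run()
	ee, ok := err.(*exec.ExitError)
	if ok && ee.ExitCode() == 14 && strings.Contains(stderr.String(), "requires CNI") {
		fmt.Println("rejected as expected")
		return
	}
	fmt.Println("unexpected result:", err)
}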

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (62.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1115 11:44:20.130208  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.496531256s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.50s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-872969 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e38478c9-e689-4a8a-a576-f61f8d997349] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e38478c9-e689-4a8a-a576-f61f8d997349] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.00310763s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-872969 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.46s)
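
Note: the deployment step boils down to applying testdata/busybox.yaml, waiting for the pod to become Running, and reading `ulimit -n` inside it. A rough Go sketch under those assumptions; the polling interval and shortened deadline are ours (the test waits up to 8m via its own helpers).

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const ctx = "old-k8s-version-872969"

	// Apply the busybox manifest used by the test (relative path assumed).
	_ = exec.Command("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml").Run()

	// Poll until the pod reports Running.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", ctx, "get", "pod", "busybox",
			"-o", "jsonpath={.status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			break
		}
		time.Sleep(5 * time.Second)
	}

	// Finish by reading the open-file limit inside the pod, as the test does.
	out, _ := exec.Command("kubectl", "--context", ctx, "exec", "busybox", "--",
		"/bin/sh", "-c", "ulimit -n").Output()
	fmt.Println("ulimit -n:", strings.TrimSpace(string(out)))
}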

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-872969 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-872969 --alsologtostderr -v=3: (12.03415195s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-872969 -n old-k8s-version-872969
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-872969 -n old-k8s-version-872969: exit status 7 (73.536883ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-872969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
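
Note: the pattern here is that a stopped cluster makes `status` exit with code 7, which the test tolerates before enabling the dashboard addon. A minimal Go sketch of the same two commands:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile = "old-k8s-version-872969"

	// `status --format={{.Host}}` exits with code 7 while the host is
	// stopped; the test notes this as "may be ok" and carries on.
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile).Output()
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
		fmt.Println("host state:", strings.TrimSpace(string(out))) // "Stopped" in the run above
	}

	// The dashboard addon is enabled while the cluster is still stopped.
	if err := exec.Command("out/minikube-linux-arm64", "addons", "enable", "dashboard",
		"-p", profile, "--images=MetricsScraper=registry.k8s.io/echoserver:1.4").Run(); err != nil {
		fmt.Println("enable dashboard:", err)
	}
}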

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (48.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-872969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.99231073s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-872969 -n old-k8s-version-872969
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9xc5k" [4d1ca727-bfad-4baa-95c1-8bdb23a987a4] Running
E1115 11:45:43.203234  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004337736s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9xc5k" [4d1ca727-bfad-4baa-95c1-8bdb23a987a4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003526073s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-872969 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-872969 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 11:46:05.445000  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m21.468205176s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (80.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m20.636804973s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.64s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-769461 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c2df1c11-9c1f-46d6-ad9f-04f87ba7c040] Pending
helpers_test.go:352: "busybox" [c2df1c11-9c1f-46d6-ad9f-04f87ba7c040] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c2df1c11-9c1f-46d6-ad9f-04f87ba7c040] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003947041s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-769461 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-769461 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-769461 --alsologtostderr -v=3: (12.033658594s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-769461 -n default-k8s-diff-port-769461
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-769461 -n default-k8s-diff-port-769461: exit status 7 (77.704059ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-769461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-769461 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.544821199s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-769461 -n default-k8s-diff-port-769461
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.12s)

TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-404149 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3107e5d8-dcc1-42b2-8764-e2ce45e76676] Pending
helpers_test.go:352: "busybox" [3107e5d8-dcc1-42b2-8764-e2ce45e76676] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3107e5d8-dcc1-42b2-8764-e2ce45e76676] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00351955s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-404149 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

TestStartStop/group/embed-certs/serial/Stop (12.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-404149 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-404149 --alsologtostderr -v=3: (12.018635085s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-404149 -n embed-certs-404149
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-404149 -n embed-certs-404149: exit status 7 (75.126382ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-404149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (51.96s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-404149 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.471768654s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-404149 -n embed-certs-404149
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.96s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dt85h" [7753b3d4-a47a-41fd-afe2-c7894ed53956] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003815615s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dt85h" [7753b3d4-a47a-41fd-afe2-c7894ed53956] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003086549s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-769461 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-769461 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/no-preload/serial/FirstStart (66.16s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 11:49:20.129695  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m6.160716521s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.16s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-97q22" [2752d856-9de4-44a2-826b-cd233ea4951b] Running
E1115 11:49:26.307814  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:49:26.314166  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:49:26.325532  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:49:26.346828  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:49:26.388844  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:49:26.470370  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:49:26.631737  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:49:26.953144  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:49:27.594699  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:49:28.876259  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003541783s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-97q22" [2752d856-9de4-44a2-826b-cd233ea4951b] Running
E1115 11:49:31.438436  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005370334s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-404149 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.16s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-404149 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/FirstStart (41.19s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-600818 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 11:49:46.801909  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:50:07.283853  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-600818 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (41.186444378s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.19s)

TestStartStop/group/no-preload/serial/DeployApp (8.43s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-126380 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [12ccf240-d78b-47c9-923c-0c9e8a54f8d0] Pending
helpers_test.go:352: "busybox" [12ccf240-d78b-47c9-923c-0c9e8a54f8d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [12ccf240-d78b-47c9-923c-0c9e8a54f8d0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.0039438s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-126380 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.43s)

TestStartStop/group/no-preload/serial/Stop (12.16s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-126380 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-126380 --alsologtostderr -v=3: (12.163963571s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.16s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-600818 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-600818 --alsologtostderr -v=3: (1.332280459s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.33s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-600818 -n newest-cni-600818
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-600818 -n newest-cni-600818: exit status 7 (71.518814ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-600818 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (19.58s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-600818 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-600818 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (18.963605565s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-600818 -n newest-cni-600818
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.58s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-126380 -n no-preload-126380
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-126380 -n no-preload-126380: exit status 7 (96.051254ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-126380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (53.71s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 11:50:48.245225  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-126380 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.179892825s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-126380 -n no-preload-126380
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.71s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-600818 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.42s)

TestNetworkPlugins/group/auto/Start (83.47s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1115 11:51:22.373208  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m23.47017036s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.47s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-t7kpg" [55ce0ad5-85a0-411a-9874-9b8c8e1b8595] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003409059s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-t7kpg" [55ce0ad5-85a0-411a-9874-9b8c8e1b8595] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003415541s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-126380 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-126380 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

TestNetworkPlugins/group/kindnet/Start (86.86s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1115 11:52:10.166616  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:52:21.532780  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:52:21.539460  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:52:21.550755  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:52:21.572122  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:52:21.613434  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:52:21.694803  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:52:21.856242  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:52:22.177833  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:52:22.819596  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:52:24.101259  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m26.859295369s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.86s)

TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-949287 "pgrep -a kubelet"
E1115 11:52:26.663230  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1115 11:52:26.806612  586561 config.go:182] Loaded profile config "auto-949287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

TestNetworkPlugins/group/auto/NetCatPod (10.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-949287 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7rtj2" [0b3cfad6-95d0-40e3-b6fc-7129a264398e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1115 11:52:31.785612  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-7rtj2" [0b3cfad6-95d0-40e3-b6fc-7129a264398e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004209434s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.39s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-949287 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-949287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-949287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (85.11s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1115 11:53:02.508657  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m25.112661574s)
--- PASS: TestNetworkPlugins/group/calico/Start (85.11s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-nrnhm" [9061cf4f-7efe-4006-afe8-9c8d48c9be7c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00426718s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-949287 "pgrep -a kubelet"
I1115 11:53:24.789404  586561 config.go:182] Loaded profile config "kindnet-949287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-949287 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h4jts" [c77a92c2-c984-4970-8b37-7f126a41ad17] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h4jts" [c77a92c2-c984-4970-8b37-7f126a41ad17] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00400064s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.39s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-949287 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-949287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-949287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/Start (57.87s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1115 11:54:20.129689  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/addons-800763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (57.865777025s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.87s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-fw2jf" [3f8bbfbd-7b74-4df7-9d0e-a5d124bdecc4] Running
E1115 11:54:26.307787  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/old-k8s-version-872969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004108403s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-949287 "pgrep -a kubelet"
I1115 11:54:30.422665  586561 config.go:182] Loaded profile config "calico-949287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

TestNetworkPlugins/group/calico/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-949287 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gcgxk" [31b7894d-6f21-4bb5-8322-4b57807a08e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gcgxk" [31b7894d-6f21-4bb5-8322-4b57807a08e0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003469014s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.41s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-949287 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-949287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-949287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-949287 "pgrep -a kubelet"
I1115 11:55:01.072244  586561 config.go:182] Loaded profile config "custom-flannel-949287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-949287 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wr8b6" [cc3a3695-5092-461c-b9e2-c210bd49d134] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wr8b6" [cc3a3695-5092-461c-b9e2-c210bd49d134] Running
E1115 11:55:12.524412  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:55:12.530839  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:55:12.542196  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:55:12.571410  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:55:12.612888  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:55:12.694357  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:55:12.856091  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:55:13.178634  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003741446s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.41s)

TestNetworkPlugins/group/enable-default-cni/Start (85.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m25.117008092s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.12s)

TestNetworkPlugins/group/custom-flannel/DNS (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-949287 exec deployment/netcat -- nslookup kubernetes.default
E1115 11:55:13.820355  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.37s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-949287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-949287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (58.15s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1115 11:55:53.520099  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 11:56:22.372499  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/functional-385299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (58.153435105s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-949287 "pgrep -a kubelet"
I1115 11:56:31.711941  586561 config.go:182] Loaded profile config "enable-default-cni-949287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-949287 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dwq7h" [7f975103-b862-4200-a798-4b4621d20b2c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1115 11:56:34.482075  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/no-preload-126380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-dwq7h" [7f975103-b862-4200-a798-4b4621d20b2c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00356793s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)
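Note: the NetCatPod step replaces the netcat deployment from testdata/netcat-deployment.yaml and then polls until a pod with the app=netcat label is Running. A rough manual equivalent using standard kubectl commands (a sketch, assuming the enable-default-cni-949287 context still exists):

    kubectl --context enable-default-cni-949287 rollout status deployment/netcat --timeout=15m
    kubectl --context enable-default-cni-949287 get pods -l app=netcat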

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-7wtmv" [3acc8e12-04e6-42fa-84cb-a204f0ff5f4e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004367675s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-949287 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-949287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-949287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-949287 "pgrep -a kubelet"
I1115 11:56:44.380139  586561 config.go:182] Loaded profile config "flannel-949287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (14.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-949287 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-plfff" [c752cefd-47a1-42da-afc1-2b7eff725bd5] Pending
helpers_test.go:352: "netcat-cd4db9dbf-plfff" [c752cefd-47a1-42da-afc1-2b7eff725bd5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.003437961s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-949287 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-949287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-949287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (76.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1115 11:57:21.533266  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/default-k8s-diff-port-769461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-949287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m16.427202316s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.43s)
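Note: with --cni=bridge the node should end up with a bridge CNI config installed. A hedged way to confirm this from the runner, assuming the standard CNI config directory is used inside the minikube node (the same ssh entry point the KubeletFlags check below uses for pgrep):

    out/minikube-linux-arm64 ssh -p bridge-949287 "ls /etc/cni/net.d"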

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-949287 "pgrep -a kubelet"
E1115 11:58:20.929156  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kindnet-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1115 11:58:20.967029  586561 config.go:182] Loaded profile config "bridge-949287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-949287 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xnz66" [1840c932-e871-407a-832f-11c0abf02708] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1115 11:58:23.491471  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kindnet-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-xnz66" [1840c932-e871-407a-832f-11c0abf02708] Running
E1115 11:58:28.613290  586561 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kindnet-949287/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003824934s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-949287 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-949287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-949287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (31/328)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.69s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-855751 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-855751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-855751
--- SKIP: TestDownloadOnlyKic (0.69s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-200933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-200933
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-949287 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-949287

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-949287

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-949287

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-949287

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-949287

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-949287

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-949287

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-949287

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-949287

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-949287

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-949287

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-949287" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-949287" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 11:37:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-436490
contexts:
- context:
    cluster: kubernetes-upgrade-436490
    user: kubernetes-upgrade-436490
  name: kubernetes-upgrade-436490
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-436490
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kubernetes-upgrade-436490/client.crt
    client-key: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kubernetes-upgrade-436490/client.key
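Note: the "context was not found" / "does not exist" errors throughout this debug dump are expected: the dumped kubeconfig only contains a kubernetes-upgrade-436490 entry with current-context "", and the kubenet-949287 profile was never created because the test skipped before starting a cluster. A quick way to confirm which contexts exist on the runner:

    kubectl config get-contexts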

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-949287

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-949287"

                                                
                                                
----------------------- debugLogs end: kubenet-949287 [took: 5.104140156s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-949287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-949287
--- SKIP: TestNetworkPlugins/group/kubenet (5.34s)
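Note: kubenet is the legacy kubelet networking mode, and the suite skips it here because the crio runtime requires a CNI plugin. A rough sketch of the closest CNI-based equivalent on this runner, reusing the flags exercised elsewhere in this report (the profile name is hypothetical):

    out/minikube-linux-arm64 start -p cni-example --memory=3072 --cni=bridge --driver=docker --container-runtime=crio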

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-949287 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-949287" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-584713/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 11:41:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-436490
contexts:
- context:
    cluster: kubernetes-upgrade-436490
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 11:41:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-436490
  name: kubernetes-upgrade-436490
current-context: kubernetes-upgrade-436490
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-436490
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kubernetes-upgrade-436490/client.crt
    client-key: /home/jenkins/minikube-integration/21894-584713/.minikube/profiles/kubernetes-upgrade-436490/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-949287

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-949287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-949287"

                                                
                                                
----------------------- debugLogs end: cilium-949287 [took: 5.828531113s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-949287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-949287
--- SKIP: TestNetworkPlugins/group/cilium (6.06s)
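Note: every command in the debugLogs dump above fails with "context ... does not exist" or "Profile ... not found" because the cilium-949287 profile was never started; the test was skipped before cluster creation, so no kubeconfig context or minikube profile exists to query. A minimal sketch (assuming kubectl and minikube are on PATH; the profile name is taken from the log above) of how one might confirm this locally:

# List kubeconfig contexts; cilium-949287 will be absent if the cluster was never created.
kubectl config get-contexts -o name | grep -x "cilium-949287" \
  || echo "context cilium-949287 not found"

# List minikube profiles; a skipped test leaves no profile behind.
minikube profile list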

                                                
                                    